Feb 03 10:02:08 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 03 10:02:08 crc restorecon[4672]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 10:02:08 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 
10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc 
restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 10:02:09 crc restorecon[4672]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 10:02:09 crc restorecon[4672]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 03 10:02:10 crc kubenswrapper[5010]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 03 10:02:10 crc kubenswrapper[5010]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 03 10:02:10 crc kubenswrapper[5010]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 03 10:02:10 crc kubenswrapper[5010]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
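Every "not reset as customized by admin" entry in the long restorecon run above is restorecon declining to touch a file whose current type (here mostly container_file_t, carrying per-pod MCS categories such as s0:c7,c13) is listed among the SELinux customizable types; without -F, restorecon preserves those labels instead of resetting them to the policy default. A minimal shell sketch for inspecting this behaviour, assuming a RHEL-family host with the targeted policy (the paths are taken from this log; the flags are standard restorecon/matchpathcon options):

# Dry run: report, without changing, anything under /var/lib/kubelet whose
# label differs from the policy default
restorecon -R -n -v /var/lib/kubelet

# Types that restorecon treats as admin-customized and therefore skips
cat /etc/selinux/targeted/contexts/customizable_types

# Compare one file's actual label against the policy default for its path
ls -Z /var/lib/kubelet/config.json
matchpathcon /var/lib/kubelet/config.json

# -F would force-reset customized labels too; commented out because it
# would strip the per-pod MCS categories the container runtime assigned
# restorecon -R -F -v /var/lib/kubelet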
Feb 03 10:02:10 crc kubenswrapper[5010]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 03 10:02:10 crc kubenswrapper[5010]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.258026 5010 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268168 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268258 5010 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268264 5010 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268270 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268275 5010 feature_gate.go:330] unrecognized feature gate: Example Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268280 5010 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268286 5010 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268291 5010 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268295 5010 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268303 5010 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268311 5010 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268331 5010 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268336 5010 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268342 5010 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268349 5010 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268354 5010 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268359 5010 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268363 5010 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268368 5010 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268372 5010 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268377 5010 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268381 5010 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268386 5010 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268391 5010 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268403 5010 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268407 5010 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268412 5010 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268416 5010 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268420 5010 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268426 5010 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268431 5010 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268435 5010 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268439 5010 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268444 5010 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268448 5010 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268453 5010 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268457 5010 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268462 5010 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268467 5010 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268472 5010 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 
10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268476 5010 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268481 5010 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268487 5010 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268494 5010 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268501 5010 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268506 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268510 5010 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268514 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268519 5010 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268523 5010 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268527 5010 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268531 5010 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268535 5010 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268541 5010 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268546 5010 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268555 5010 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268559 5010 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268564 5010 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268568 5010 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268572 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268576 5010 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268581 5010 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268585 5010 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268590 5010 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268594 5010 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268599 5010 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268603 5010 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268607 5010 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268612 5010 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268616 5010 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.268621 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270492 5010 flags.go:64] FLAG: --address="0.0.0.0" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270523 5010 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270535 5010 flags.go:64] FLAG: --anonymous-auth="true" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270546 5010 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270556 5010 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270563 5010 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270572 5010 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270580 5010 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270586 5010 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270593 
5010 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270599 5010 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270607 5010 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270614 5010 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270620 5010 flags.go:64] FLAG: --cgroup-root="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270626 5010 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270632 5010 flags.go:64] FLAG: --client-ca-file="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270638 5010 flags.go:64] FLAG: --cloud-config="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270644 5010 flags.go:64] FLAG: --cloud-provider="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270651 5010 flags.go:64] FLAG: --cluster-dns="[]" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270662 5010 flags.go:64] FLAG: --cluster-domain="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270670 5010 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270677 5010 flags.go:64] FLAG: --config-dir="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270685 5010 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270692 5010 flags.go:64] FLAG: --container-log-max-files="5" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270701 5010 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270706 5010 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270713 5010 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270718 5010 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270724 5010 flags.go:64] FLAG: --contention-profiling="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270730 5010 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270736 5010 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270742 5010 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270747 5010 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270755 5010 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270761 5010 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270766 5010 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270772 5010 flags.go:64] FLAG: --enable-load-reader="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270779 5010 flags.go:64] FLAG: --enable-server="true" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270784 5010 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 
10:02:10.270793 5010 flags.go:64] FLAG: --event-burst="100" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270799 5010 flags.go:64] FLAG: --event-qps="50" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270804 5010 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270810 5010 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270815 5010 flags.go:64] FLAG: --eviction-hard="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270823 5010 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270828 5010 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270834 5010 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270841 5010 flags.go:64] FLAG: --eviction-soft="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270847 5010 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270853 5010 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270859 5010 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270865 5010 flags.go:64] FLAG: --experimental-mounter-path="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270871 5010 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270878 5010 flags.go:64] FLAG: --fail-swap-on="true" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270884 5010 flags.go:64] FLAG: --feature-gates="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270891 5010 flags.go:64] FLAG: --file-check-frequency="20s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270897 5010 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270903 5010 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270909 5010 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270915 5010 flags.go:64] FLAG: --healthz-port="10248" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270920 5010 flags.go:64] FLAG: --help="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270925 5010 flags.go:64] FLAG: --hostname-override="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270930 5010 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270936 5010 flags.go:64] FLAG: --http-check-frequency="20s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270941 5010 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270946 5010 flags.go:64] FLAG: --image-credential-provider-config="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270951 5010 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270959 5010 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270964 5010 flags.go:64] FLAG: --image-service-endpoint="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270969 5010 flags.go:64] FLAG: 
--kernel-memcg-notification="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270974 5010 flags.go:64] FLAG: --kube-api-burst="100" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270980 5010 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270986 5010 flags.go:64] FLAG: --kube-api-qps="50" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270991 5010 flags.go:64] FLAG: --kube-reserved="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.270996 5010 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271001 5010 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271007 5010 flags.go:64] FLAG: --kubelet-cgroups="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271012 5010 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271018 5010 flags.go:64] FLAG: --lock-file="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271023 5010 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271028 5010 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271034 5010 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271069 5010 flags.go:64] FLAG: --log-json-split-stream="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271076 5010 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271081 5010 flags.go:64] FLAG: --log-text-split-stream="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271089 5010 flags.go:64] FLAG: --logging-format="text" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271095 5010 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271101 5010 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271106 5010 flags.go:64] FLAG: --manifest-url="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271112 5010 flags.go:64] FLAG: --manifest-url-header="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271120 5010 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271126 5010 flags.go:64] FLAG: --max-open-files="1000000" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271133 5010 flags.go:64] FLAG: --max-pods="110" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271139 5010 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271145 5010 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271151 5010 flags.go:64] FLAG: --memory-manager-policy="None" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271157 5010 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271162 5010 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271167 5010 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271173 5010 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271190 5010 flags.go:64] FLAG: --node-status-max-images="50" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271195 5010 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271201 5010 flags.go:64] FLAG: --oom-score-adj="-999" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271206 5010 flags.go:64] FLAG: --pod-cidr="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271236 5010 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271246 5010 flags.go:64] FLAG: --pod-manifest-path="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271251 5010 flags.go:64] FLAG: --pod-max-pids="-1" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271257 5010 flags.go:64] FLAG: --pods-per-core="0" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271262 5010 flags.go:64] FLAG: --port="10250" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271269 5010 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271274 5010 flags.go:64] FLAG: --provider-id="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271279 5010 flags.go:64] FLAG: --qos-reserved="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271285 5010 flags.go:64] FLAG: --read-only-port="10255" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271290 5010 flags.go:64] FLAG: --register-node="true" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271296 5010 flags.go:64] FLAG: --register-schedulable="true" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271301 5010 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271313 5010 flags.go:64] FLAG: --registry-burst="10" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271319 5010 flags.go:64] FLAG: --registry-qps="5" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271365 5010 flags.go:64] FLAG: --reserved-cpus="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271373 5010 flags.go:64] FLAG: --reserved-memory="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271381 5010 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271387 5010 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271393 5010 flags.go:64] FLAG: --rotate-certificates="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271398 5010 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271403 5010 flags.go:64] FLAG: --runonce="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271409 5010 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271414 5010 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271420 5010 flags.go:64] FLAG: --seccomp-default="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271425 5010 flags.go:64] FLAG: --serialize-image-pulls="true" 
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271431 5010 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271436 5010 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271442 5010 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271448 5010 flags.go:64] FLAG: --storage-driver-password="root" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271454 5010 flags.go:64] FLAG: --storage-driver-secure="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271459 5010 flags.go:64] FLAG: --storage-driver-table="stats" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271464 5010 flags.go:64] FLAG: --storage-driver-user="root" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271470 5010 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271475 5010 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271481 5010 flags.go:64] FLAG: --system-cgroups="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271486 5010 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271500 5010 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271505 5010 flags.go:64] FLAG: --tls-cert-file="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271510 5010 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271519 5010 flags.go:64] FLAG: --tls-min-version="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271524 5010 flags.go:64] FLAG: --tls-private-key-file="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271529 5010 flags.go:64] FLAG: --topology-manager-policy="none" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271543 5010 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271548 5010 flags.go:64] FLAG: --topology-manager-scope="container" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271554 5010 flags.go:64] FLAG: --v="2" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271569 5010 flags.go:64] FLAG: --version="false" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271580 5010 flags.go:64] FLAG: --vmodule="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271588 5010 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.271594 5010 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271782 5010 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271790 5010 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271796 5010 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271801 5010 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271806 5010 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig 
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271811 5010 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271817 5010 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271822 5010 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271826 5010 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271830 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271835 5010 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271839 5010 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271844 5010 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271849 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271853 5010 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271857 5010 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271862 5010 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271866 5010 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271871 5010 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271875 5010 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271879 5010 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271883 5010 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271887 5010 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271892 5010 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271896 5010 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271902 5010 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271907 5010 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271912 5010 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271918 5010 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271925 5010 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271931 5010 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271935 5010 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271941 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271947 5010 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271952 5010 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271957 5010 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271961 5010 feature_gate.go:330] unrecognized feature gate: Example Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271966 5010 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271972 5010 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271977 5010 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271982 5010 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271987 5010 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271992 5010 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.271997 5010 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272002 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272007 5010 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272011 5010 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272016 5010 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272021 5010 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272026 5010 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272030 5010 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272035 5010 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272040 5010 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272044 5010 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272050 5010 feature_gate.go:330] unrecognized feature gate: ExternalOIDC 
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272054 5010 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272059 5010 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272064 5010 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272069 5010 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272073 5010 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272078 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272083 5010 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272089 5010 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272094 5010 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272099 5010 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272104 5010 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272109 5010 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272116 5010 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272121 5010 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272127 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.272133 5010 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
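Taken together, the feature_gate.go:330 runs above are the same set of names warned about twice, and they will appear twice more below: the kubelet re-evaluates its feature-gate map at several points during startup, and every OpenShift-operator gate the upstream registry does not know (MultiArchInstallAWS, GatewayAPI, NewOLM, and the rest) is reported on each pass. This is expected noise on an OpenShift node, not a misconfiguration. A short sketch, assuming only the message format above (names illustrative), that collapses the noise into a distinct-gate count:

    import re
    import sys
    from collections import Counter

    # One warning per unknown gate name, repeated on every parse pass, so the
    # Counter separates "how many distinct gates" from "how many log lines".
    UNRECOGNIZED = re.compile(r"unrecognized feature gate: ([A-Za-z0-9]+)")

    def unknown_gates(journal: str) -> Counter:
        return Counter(UNRECOGNIZED.findall(journal))

    if __name__ == "__main__":
        counts = unknown_gates(sys.stdin.read())
        print(f"{len(counts)} distinct gates, {sum(counts.values())} warnings")
        for name, n in counts.most_common():
            print(f"{n:3d}  {name}")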
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.272150 5010 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.281742 5010 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.281812 5010 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281905 5010 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281917 5010 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281925 5010 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281931 5010 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281937 5010 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281942 5010 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281948 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281953 5010 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281958 5010 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281965 5010 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281974 5010 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281981 5010 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281988 5010 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.281996 5010 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282003 5010 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282008 5010 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282014 5010 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282019 5010 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282024 5010 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282029 5010 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282035 5010 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282040 5010 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282047 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282052 5010 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282057 5010 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282061 5010 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282065 5010 feature_gate.go:330] unrecognized feature gate: Example Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282070 5010 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282075 5010 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282081 5010 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282089 5010 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282095 5010 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282099 5010 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282105 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282109 5010 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282114 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282118 5010 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282123 5010 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282128 5010 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282132 5010 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282136 5010 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282140 5010 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282146 5010 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282152 5010 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282157 5010 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282162 5010 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282167 5010 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282171 5010 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282176 5010 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282180 5010 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282184 5010 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282188 5010 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282192 5010 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282196 5010 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282200 5010 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282204 5010 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282208 5010 
feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282233 5010 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282238 5010 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282244 5010 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282249 5010 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282253 5010 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282261 5010 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282266 5010 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282271 5010 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282276 5010 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282280 5010 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282284 5010 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282289 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282293 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282297 5010 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.282306 5010 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282460 5010 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282469 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282474 5010 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282478 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282483 5010 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282488 5010 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282494 5010 feature_gate.go:330] unrecognized feature gate: 
VSphereMultiVCenters Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282498 5010 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282503 5010 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282507 5010 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282512 5010 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282516 5010 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282520 5010 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282526 5010 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282532 5010 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282536 5010 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282541 5010 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282545 5010 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282550 5010 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282554 5010 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282559 5010 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282564 5010 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282568 5010 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282574 5010 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282578 5010 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282582 5010 feature_gate.go:330] unrecognized feature gate: Example Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282586 5010 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282590 5010 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282595 5010 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282600 5010 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282604 5010 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282608 5010 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 
10:02:10.282613 5010 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282618 5010 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282622 5010 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282626 5010 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282630 5010 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282634 5010 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282641 5010 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282646 5010 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282652 5010 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282657 5010 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282662 5010 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282667 5010 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282671 5010 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282676 5010 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282680 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282684 5010 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282688 5010 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282692 5010 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282696 5010 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282701 5010 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282705 5010 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282709 5010 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282713 5010 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282719 5010 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282724 5010 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282728 5010 feature_gate.go:330] 
unrecognized feature gate: NutanixMultiSubnets Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282732 5010 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282737 5010 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282742 5010 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282746 5010 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282751 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282755 5010 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282759 5010 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282765 5010 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282769 5010 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282773 5010 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282778 5010 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282782 5010 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.282787 5010 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.282794 5010 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.283107 5010 server.go:940] "Client rotation is on, will bootstrap in background" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.288287 5010 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.288430 5010 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
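Each feature_gate.go:386 summary above renders the resolved map the same way (Go's fmt printing of the map), and all of them agree: KMSv1 and the GA gates CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, and ValidatingAdmissionPolicy forced on, the remainder off. That is the quick check that the repeated warning passes changed nothing between parses. A sketch that extracts those summaries and verifies they match, assuming the "{map[Name:bool ...]}" rendering shown above (pattern and helper names are illustrative):

    import re
    import sys

    # Matches the body of: feature gates: {map[CloudDualStackNodeIPs:true ...]}
    SUMMARY = re.compile(r"feature gates: \{map\[([^\]]*)\]\}")
    PAIR = re.compile(r"([A-Za-z0-9]+):(true|false)")

    def gate_maps(journal: str) -> list[dict[str, bool]]:
        """One dict per 'feature gates:' summary line in the capture."""
        return [
            {name: val == "true" for name, val in PAIR.findall(body)}
            for body in SUMMARY.findall(journal)
        ]

    if __name__ == "__main__":
        maps = gate_maps(sys.stdin.read())
        if not maps:
            sys.exit("no feature-gate summaries found")
        print(f"{len(maps)} summaries, all identical: {all(m == maps[0] for m in maps)}")
        for name, enabled in sorted(maps[0].items()):
            print(f"{name} = {enabled}")

For this capture the three summaries are identical, so the script should print the fifteen resolved gates once.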
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.290776 5010 server.go:997] "Starting client certificate rotation"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.290813 5010 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.291050 5010 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-08 02:23:25.309599455 +0000 UTC
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.291192 5010 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.319047 5010 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.322842 5010 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.323573 5010 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.347662 5010 log.go:25] "Validated CRI v1 runtime API"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.384287 5010 log.go:25] "Validated CRI v1 image API"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.386133 5010 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.392265 5010 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-03-09-57-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.392307 5010 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.407156 5010 manager.go:217] Machine: {Timestamp:2026-02-03 10:02:10.404570033 +0000 UTC m=+0.560546172 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:83993284-2ce8-4ad1-9fe3-91205d527513 BootID:5c3370a1-7640-4a44-9e90-cab33c833dc6 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0e:06:ab Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0e:06:ab Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3a:32:8d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8b:04:69 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:81:5a:c4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:8a:17:18 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:96:3f:06:5e:6f:7d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0a:ce:4d:c3:3f:dc Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.407489 5010 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.407626 5010 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.408874 5010 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.409201 5010 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.409307 5010 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.409665 5010 topology_manager.go:138] "Creating topology manager with none policy"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.409685 5010 container_manager_linux.go:303] "Creating device plugin manager"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.410289 5010 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.410346 5010 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.410615 5010 state_mem.go:36] "Initialized new in-memory state store"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.410763 5010 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.415007 5010 kubelet.go:418] "Attempting to sync node with API server"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.415042 5010 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.415068 5010 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.415091 5010 kubelet.go:324] "Adding apiserver pod source"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.415110 5010 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.419607 5010 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.420404 5010 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused
Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.420476 5010 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError"
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.420432 5010 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused
Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.420517 5010 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.420638 5010 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
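Note: the certificate_manager lines above show a client certificate expiring 2026-02-24 with a rotation deadline of 2025-12-08, roughly 79% of the way through a one-year validity window; since the log is from 2026-02-03 the deadline has already passed, which is why the very next line starts rotating and the CSR POST then fails with connection refused (the apiserver is not up yet). Kubernetes jitters the deadline so a fleet of kubelets does not renew at the same instant. A small Go sketch of that idea, with assumed jitter constants (client-go uses its own) and an assumed issue date (only the expiration appears in the log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point in the tail of the cert's
// validity window. The 70%..90% band is an assumption for illustration,
// not the exact constants used by client-go's certificate manager.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Roughly the client certificate window from the log above
	// (issue date assumed to be one year before expiration).
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	fmt.Println("rotate at:", rotationDeadline(notBefore, notAfter))
}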
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.423542 5010 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425117 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425141 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425148 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425155 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425165 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425172 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425178 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425188 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425196 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425203 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425226 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.425234 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.426990 5010 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.427456 5010 server.go:1280] "Started kubelet"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.428472 5010 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 03 10:02:10 crc systemd[1]: Started Kubernetes Kubelet.
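Note: each "Loaded volume plugin" line above registers one in-tree plugin (plus CSI) under a unique kubernetes.io/... name, and pod volumes are later dispatched by that name. A toy Go sketch of the registry pattern; the types here are invented for illustration, only the plugin names are real:

package main

import "fmt"

// VolumePlugin is a cut-down stand-in for the kubelet's volume plugin
// interface; real plugins also expose mount/unmount machinery.
type VolumePlugin interface {
	Name() string
}

type hostPathPlugin struct{}

func (hostPathPlugin) Name() string { return "kubernetes.io/host-path" }

type emptyDirPlugin struct{}

func (emptyDirPlugin) Name() string { return "kubernetes.io/empty-dir" }

// registry maps a unique plugin name to its implementation, which is
// why every "Loaded volume plugin" line carries a distinct name.
type registry map[string]VolumePlugin

func (r registry) load(p VolumePlugin) {
	r[p.Name()] = p
	fmt.Printf("Loaded volume plugin %q\n", p.Name())
}

func main() {
	r := registry{}
	r.load(hostPathPlugin{})
	r.load(emptyDirPlugin{})
	if _, ok := r["kubernetes.io/host-path"]; ok {
		fmt.Println("host-path plugin available for pod volumes")
	}
}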
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.428588 5010 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.431549 5010 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.431775 5010 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.434725 5010 server.go:460] "Adding debug handlers to kubelet server"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.437133 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.437167 5010 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.437204 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:05:42.773011635 +0000 UTC
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.438806 5010 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.438886 5010 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.438900 5010 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.439045 5010 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.439169 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="200ms"
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.439536 5010 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused
Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.439877 5010 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError"
Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.439065 5010 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.58:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890b454ee73dc0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 10:02:10.427427852 +0000 UTC m=+0.583403981,LastTimestamp:2026-02-03 10:02:10.427427852 +0000 UTC m=+0.583403981,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.441137 5010 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.441192 5010 factory.go:55] Registering systemd factory
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.441261 5010 factory.go:221] Registration of the systemd container factory successfully
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.441671 5010 factory.go:153] Registering CRI-O factory
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.441706 5010 factory.go:221] Registration of the crio container factory successfully
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.441740 5010 factory.go:103] Registering Raw factory
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.441761 5010 manager.go:1196] Started watching for new ooms in manager
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.442676 5010 manager.go:319] Starting recovery of all containers
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449733 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449789 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449799 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449810 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449819 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449829 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449837 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449846 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449862 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449884 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449897 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449911 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449923 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449936 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449946 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449958 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449969 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449979 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449990 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.449999 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450009 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450022 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450031 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450039 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450047 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450056 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450067 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450077 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450086 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450095 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450143 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450153 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450163 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450172 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450181 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450190 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450199 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450224 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450236 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450245 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450255 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450265 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450275 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450284 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450294 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450304 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450316 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450329 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450342 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450355 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450368 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450378 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450393 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450406 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450416 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450426 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450436 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450447 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450455 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450464 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450473 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450483 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450492 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450501 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450511 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450520 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450529 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450537 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450547 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450556 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450564 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450574 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450582 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450592 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450602 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450611 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450619 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.450629 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452457 5010 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452495 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452520 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452536 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452549 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452563 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452576 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452591 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452612 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452629 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452642 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452654 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452668 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452681 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452694 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452707 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452723 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452736 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452750 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452763 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452777 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452790 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452804 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452814 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452824 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452834 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452844 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452860 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452871 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452881 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452892 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452903 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452915 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452926 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452937 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452955 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.452980 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453014 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453036 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453051 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453067 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453080 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453093 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453108 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453126 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453148 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453160 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453175 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453188 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453201 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453232 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453243 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453254 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453263 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453274 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453284 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453294 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453304 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453318 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453333 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453346 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453358 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453371 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453385 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453397 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453409 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453424 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453435 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453445 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453456 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453470 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453482 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453495 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453507 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453520 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453541 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453555 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453567 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453578 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453590 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453599 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453609 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453628 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453637 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453647 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453657 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453669 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453678 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453687 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453697 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453708 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453719 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453729 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453741 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453751 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453761 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453771 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453781 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453791 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453800 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453810 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453820 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453830 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453839 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453849 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453858 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453867 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453880 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453889 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453898 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453908 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453920 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453932 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453942 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453951 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453961 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453970 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453980 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.453990 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454000 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454010 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454019 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454028 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454038 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454047 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454056 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454066 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454076 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454086 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454096 5010 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454106 5010 reconstruct.go:97] "Volume reconstruction finished" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.454113 5010 reconciler.go:26] "Reconciler: start to sync state" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.467447 5010 manager.go:324] Recovery completed Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.482026 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.483725 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.483772 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.483789 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.484752 5010 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.484777 5010 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.484807 5010 state_mem.go:36] "Initialized new in-memory state store" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.498430 5010 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.500525 5010 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.500660 5010 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.500801 5010 kubelet.go:2335] "Starting kubelet main sync loop" Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.500995 5010 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.502883 5010 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.502965 5010 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.513047 5010 policy_none.go:49] "None policy: Start" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.515050 5010 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.515236 5010 state_mem.go:35] "Initializing new in-memory state store" Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.540916 5010 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.573200 5010 manager.go:334] "Starting Device Plugin manager" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.573285 5010 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.573300 5010 server.go:79] "Starting device plugin registration server" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.573951 5010 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.573981 5010 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.574359 5010 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.574544 5010 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.574605 5010 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.580591 5010 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.602006 5010 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 03 10:02:10 crc kubenswrapper[5010]: 
I0203 10:02:10.602153 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.603989 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.604053 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.604065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.604346 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.604540 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.604610 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.605530 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.605580 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.605588 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.605622 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.605661 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.605700 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.605890 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.606151 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.606234 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.606707 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.606735 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.606744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.606972 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.607202 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.607304 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.607782 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.607829 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.607845 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.607931 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.608002 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.608014 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.608232 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.608331 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.608368 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.608396 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.608441 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.608454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.609378 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.609408 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.609419 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.609536 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.609566 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.609578 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.609718 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.609744 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.610440 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.610469 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.610481 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.640722 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="400ms" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.655844 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.655922 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.655951 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.655988 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.656069 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.656166 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.656225 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.656285 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.656319 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.656355 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.656397 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.656417 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.656455 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.656482 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.656506 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.674779 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.677353 5010 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.677401 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.677413 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.677447 5010 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.680178 5010 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.757685 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758183 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.757833 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758251 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758293 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758319 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758330 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758361 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758361 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758393 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758385 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758427 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758438 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758459 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758490 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758545 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758561 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758565 5010 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758590 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758594 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758615 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758614 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758639 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758622 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758668 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758674 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758650 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758695 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758704 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.758818 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.881356 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.883189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.883266 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.883285 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.883323 5010 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 10:02:10 crc kubenswrapper[5010]: E0203 10:02:10.883925 5010 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.930631 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.940443 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.961687 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.980400 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:10 crc kubenswrapper[5010]: I0203 10:02:10.984533 5010 util.go:30] "No sandbox for pod can be found. 
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.989982 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4bce108908948fc1a12d70ba94e0d9c1d2601cc5192d9d59bf3b3856c158f3d3 WatchSource:0}: Error finding container 4bce108908948fc1a12d70ba94e0d9c1d2601cc5192d9d59bf3b3856c158f3d3: Status 404 returned error can't find the container with id 4bce108908948fc1a12d70ba94e0d9c1d2601cc5192d9d59bf3b3856c158f3d3
Feb 03 10:02:10 crc kubenswrapper[5010]: W0203 10:02:10.992862 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-538004cba143a0b0171b2f7bc156f42c2533a4109bc0b83818251b023730d7fe WatchSource:0}: Error finding container 538004cba143a0b0171b2f7bc156f42c2533a4109bc0b83818251b023730d7fe: Status 404 returned error can't find the container with id 538004cba143a0b0171b2f7bc156f42c2533a4109bc0b83818251b023730d7fe
Feb 03 10:02:11 crc kubenswrapper[5010]: W0203 10:02:11.004429 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5e7f094656a7bebbc451064d22a9cff9681acc516177ae4ee22f30351e7b934c WatchSource:0}: Error finding container 5e7f094656a7bebbc451064d22a9cff9681acc516177ae4ee22f30351e7b934c: Status 404 returned error can't find the container with id 5e7f094656a7bebbc451064d22a9cff9681acc516177ae4ee22f30351e7b934c
Feb 03 10:02:11 crc kubenswrapper[5010]: W0203 10:02:11.005087 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-cafd9426e400a85c95b2545e2b2f2fb1a37fe1a76ce9b9c589bfdf161a32262f WatchSource:0}: Error finding container cafd9426e400a85c95b2545e2b2f2fb1a37fe1a76ce9b9c589bfdf161a32262f: Status 404 returned error can't find the container with id cafd9426e400a85c95b2545e2b2f2fb1a37fe1a76ce9b9c589bfdf161a32262f
Feb 03 10:02:11 crc kubenswrapper[5010]: W0203 10:02:11.008453 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6812d7e3842b7d1a72865454a54173c51a996d38bf397084becb6ce93d1ab4fa WatchSource:0}: Error finding container 6812d7e3842b7d1a72865454a54173c51a996d38bf397084becb6ce93d1ab4fa: Status 404 returned error can't find the container with id 6812d7e3842b7d1a72865454a54173c51a996d38bf397084becb6ce93d1ab4fa
Feb 03 10:02:11 crc kubenswrapper[5010]: E0203 10:02:11.042146 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="800ms"
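Two things are visible above. The manager.go:1169 warnings are cAdvisor failing to find containers whose cgroups just appeared; the crio-<id> names are the pod sandboxes CRI-O is creating at this moment, so a 404 here is a startup race rather than a real fault (the same IDs reappear as ContainerStarted events a moment later in the log). And controller.go:145 begins the node-lease retry loop against the still-unreachable API server at interval="800ms"; later entries show the interval doubling through 1.6s, 3.2s, and 6.4s. A sketch of that doubling (the 7s cap is an assumption for illustration, not a value taken from this log):

```python
from datetime import timedelta

def lease_retry_intervals(base=timedelta(milliseconds=800),
                          cap=timedelta(seconds=7)):
    """Yield doubling retry intervals like the ones this log shows:
    800ms, 1.6s, 3.2s, 6.4s, ... The cap is illustrative only."""
    interval = base
    while True:
        yield interval
        interval = min(interval * 2, cap)

gen = lease_retry_intervals()
print([str(next(gen)) for _ in range(4)])
# ['0:00:00.800000', '0:00:01.600000', '0:00:03.200000', '0:00:06.400000']
```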
Feb 03 10:02:11 crc kubenswrapper[5010]: E0203 10:02:11.200424 5010 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.58:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890b454ee73dc0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 10:02:10.427427852 +0000 UTC m=+0.583403981,LastTimestamp:2026-02-03 10:02:10.427427852 +0000 UTC m=+0.583403981,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.284333 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.285555 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.285643 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.285663 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.285706 5010 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 10:02:11 crc kubenswrapper[5010]: E0203 10:02:11.286450 5010 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc"
Feb 03 10:02:11 crc kubenswrapper[5010]: W0203 10:02:11.327803 5010 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused
Feb 03 10:02:11 crc kubenswrapper[5010]: E0203 10:02:11.327920 5010 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError"
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.433033 5010 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.437473 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:30:23.923182541 +0000 UTC
Feb 03 10:02:11 crc kubenswrapper[5010]: W0203 10:02:11.437879 5010 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused
Feb 03 10:02:11 crc kubenswrapper[5010]: E0203 10:02:11.437985 5010 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError"
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.507796 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6812d7e3842b7d1a72865454a54173c51a996d38bf397084becb6ce93d1ab4fa"}
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.511497 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5e7f094656a7bebbc451064d22a9cff9681acc516177ae4ee22f30351e7b934c"}
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.512931 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cafd9426e400a85c95b2545e2b2f2fb1a37fe1a76ce9b9c589bfdf161a32262f"}
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.514413 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"538004cba143a0b0171b2f7bc156f42c2533a4109bc0b83818251b023730d7fe"}
Feb 03 10:02:11 crc kubenswrapper[5010]: I0203 10:02:11.515531 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4bce108908948fc1a12d70ba94e0d9c1d2601cc5192d9d59bf3b3856c158f3d3"}
Feb 03 10:02:11 crc kubenswrapper[5010]: W0203 10:02:11.682453 5010 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused
Feb 03 10:02:11 crc kubenswrapper[5010]: E0203 10:02:11.682599 5010 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError"
Feb 03 10:02:11 crc kubenswrapper[5010]: E0203 10:02:11.843903 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="1.6s"
Feb 03 10:02:12 crc kubenswrapper[5010]: W0203 10:02:12.042531 5010 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused
Feb 03 10:02:12 crc kubenswrapper[5010]: E0203 10:02:12.042941 5010 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.086586 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.088126 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.088189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.088202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.088271 5010 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 10:02:12 crc kubenswrapper[5010]: E0203 10:02:12.088778 5010 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.415474 5010 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 03 10:02:12 crc kubenswrapper[5010]: E0203 10:02:12.416768 5010 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.433389 5010 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.438517 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:17:09.581871146 +0000 UTC
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.521266 5010 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db" exitCode=0
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.521380 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db"}
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.521431 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.522730 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.522776 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.522789 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.525619 5010 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a" exitCode=0
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.525715 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a"}
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.525764 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.527531 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.527564 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.527581 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.528916 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709" exitCode=0
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.528985 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709"}
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.529050 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.530074 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.530123 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.530144 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.532451 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f"}
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.532496 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868"}
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.532513 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b"}
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.532515 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.532527 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624"}
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.533738 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.533799 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.533734 5010 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fd3172dc98f9bd36f672f65272b6ef0548d5ab55e45c8d1c3309735fc3d20a46" exitCode=0
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.533815 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.533793 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fd3172dc98f9bd36f672f65272b6ef0548d5ab55e45c8d1c3309735fc3d20a46"}
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.534012 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.534942 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.535067 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.535102 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.535117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.535716 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.535740 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.535750 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:12 crc kubenswrapper[5010]: I0203 10:02:12.933905 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.432808 5010 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused
Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.438971 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:05:33.363648225 +0000 UTC
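The certificate_manager.go:356 lines are worth a second look: the kubelet-serving certificate's expiration is fixed at 2026-02-24 05:53:03, yet every line reports a different rotation deadline. The deadline is recomputed with random jitter, so rotation lands at an unpredictable point late in the certificate's lifetime rather than in a thundering herd. A sketch of the idea (the 70-90% window and the notBefore value are assumptions for illustration, not taken from this log or from client-go's exact constants):

```python
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before, not_after, rng=random.random):
    """Pick a jittered deadline in the late part of the cert lifetime.
    The 0.7-0.9 window is illustrative; the real certificate manager
    uses its own jitter constants."""
    lifetime = (not_after - not_before).total_seconds()
    fraction = 0.7 + 0.2 * rng()
    return not_before + timedelta(seconds=lifetime * fraction)

not_after = datetime(2026, 2, 24, 5, 53, 3)   # expiration from the log
not_before = not_after - timedelta(days=90)   # hypothetical issue time
for _ in range(3):
    print(rotation_deadline(not_before, not_after))  # differs every call
```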
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="3.2s" Feb 03 10:02:13 crc kubenswrapper[5010]: W0203 10:02:13.497586 5010 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Feb 03 10:02:13 crc kubenswrapper[5010]: E0203 10:02:13.497695 5010 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.519065 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.539394 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6"} Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.539452 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.540425 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.540460 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.540473 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.543236 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b"} Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.543273 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d"} Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.543288 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5"} Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.543300 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6"} Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.543312 5010 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a"} Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.543424 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.544163 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.544190 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.544202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.546013 5010 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="95a84d597354ad5b8f4b36049c29ec5bef9982f82c988bba69e9fbc77958032e" exitCode=0 Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.546073 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"95a84d597354ad5b8f4b36049c29ec5bef9982f82c988bba69e9fbc77958032e"} Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.546180 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.546970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.546994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.547005 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.553370 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.553441 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.554260 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e"} Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.554351 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f"} Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.554368 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e"} Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.554635 5010 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.554669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.554680 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.554744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.554765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.554775 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.689660 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.690757 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.690795 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.690807 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:13 crc kubenswrapper[5010]: I0203 10:02:13.690830 5010 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 10:02:13 crc kubenswrapper[5010]: E0203 10:02:13.691285 5010 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc" Feb 03 10:02:13 crc kubenswrapper[5010]: W0203 10:02:13.963608 5010 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Feb 03 10:02:13 crc kubenswrapper[5010]: E0203 10:02:13.963783 5010 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Feb 03 10:02:14 crc kubenswrapper[5010]: W0203 10:02:14.001695 5010 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Feb 03 10:02:14 crc kubenswrapper[5010]: E0203 10:02:14.001785 5010 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.439414 
5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:23:06.262077801 +0000 UTC Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.558904 5010 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5a5ba2a290693520ab1c03bfcf9baa02768d6112f452c205d187b827ec065860" exitCode=0 Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.559052 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.559075 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.559134 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.559140 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5a5ba2a290693520ab1c03bfcf9baa02768d6112f452c205d187b827ec065860"} Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.559243 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.559322 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.559341 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.559400 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.560693 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.560758 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.560785 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.560898 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.560945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.560969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.561083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.561110 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.561122 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.561518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.561574 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.561592 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.562805 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.562841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:14 crc kubenswrapper[5010]: I0203 10:02:14.562857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.440409 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:04:50.331295884 +0000 UTC Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.567456 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"32bb7e23791044ac62b774a809eefec90c37195581f3a062ec0328a0f3156771"} Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.567514 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9336946ed9378970e4cf4204dae54c84331a56d8bb0c34a96a18756a03564c2d"} Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.567525 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3acf4d9a81d55d48408fc220d27652171a691f91f84894a35677f27f1ea9beaf"} Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.567534 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"33c6da9549a593611fce2b9ac2e1730afa277e407ab3d553648c86cca72df9dd"} Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.567544 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"64179c9dc656cd2ae54ef87a2dd73427521252105f7f7db946b69951cf308654"} Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.567627 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.568705 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.568754 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.568768 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.737976 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 
Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.738359 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.740349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.740424 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.740450 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:15 crc kubenswrapper[5010]: I0203 10:02:15.848136 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.440968 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:07:00.131931602 +0000 UTC
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.446442 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.519990 5010 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.520097 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.532151 5010 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.570100 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.571109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.571156 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.571174 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.696050 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.696286 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.696332 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.697807 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.697856 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.697868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.892081 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.893965 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.894027 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.894045 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:16 crc kubenswrapper[5010]: I0203 10:02:16.894083 5010 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 10:02:17 crc kubenswrapper[5010]: I0203 10:02:17.441174 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:38:42.318478958 +0000 UTC
Feb 03 10:02:17 crc kubenswrapper[5010]: I0203 10:02:17.572296 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:17 crc kubenswrapper[5010]: I0203 10:02:17.573329 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:17 crc kubenswrapper[5010]: I0203 10:02:17.573395 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:17 crc kubenswrapper[5010]: I0203 10:02:17.573413 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:17 crc kubenswrapper[5010]: I0203 10:02:17.799950 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:02:17 crc kubenswrapper[5010]: I0203 10:02:17.800287 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 03 10:02:17 crc kubenswrapper[5010]: I0203 10:02:17.800373 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:17 crc kubenswrapper[5010]: I0203 10:02:17.801870 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:17 crc kubenswrapper[5010]: I0203 10:02:17.801922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:17 crc kubenswrapper[5010]: I0203 10:02:17.801938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:18 crc kubenswrapper[5010]: I0203 10:02:18.441570 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:03:34.218256326 +0000 UTC
Feb 03 10:02:19 crc kubenswrapper[5010]: I0203 10:02:19.013778 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 10:02:19 crc kubenswrapper[5010]: I0203 10:02:19.014084 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:19 crc kubenswrapper[5010]: I0203 10:02:19.015725 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:19 crc kubenswrapper[5010]: I0203 10:02:19.015781 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:19 crc kubenswrapper[5010]: I0203 10:02:19.015800 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:19 crc kubenswrapper[5010]: I0203 10:02:19.317838 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 10:02:19 crc kubenswrapper[5010]: I0203 10:02:19.318137 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:19 crc kubenswrapper[5010]: I0203 10:02:19.320328 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:19 crc kubenswrapper[5010]: I0203 10:02:19.320390 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:19 crc kubenswrapper[5010]: I0203 10:02:19.320417 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:19 crc kubenswrapper[5010]: I0203 10:02:19.442502 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:33:25.069983636 +0000 UTC
Feb 03 10:02:20 crc kubenswrapper[5010]: I0203 10:02:20.443281 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:20:15.827868904 +0000 UTC
Feb 03 10:02:20 crc kubenswrapper[5010]: I0203 10:02:20.535958 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 10:02:20 crc kubenswrapper[5010]: I0203 10:02:20.536152 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:20 crc kubenswrapper[5010]: I0203 10:02:20.537458 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:20 crc kubenswrapper[5010]: I0203 10:02:20.537507 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:20 crc kubenswrapper[5010]: I0203 10:02:20.537518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:20 crc kubenswrapper[5010]: I0203 10:02:20.541845 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 10:02:20 crc kubenswrapper[5010]: I0203 10:02:20.579483 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 10:02:20 crc kubenswrapper[5010]: E0203 10:02:20.581238 5010 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 03 10:02:20 crc kubenswrapper[5010]: I0203 10:02:20.581438 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:20 crc kubenswrapper[5010]: I0203 10:02:20.581533 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:20 crc kubenswrapper[5010]: I0203 10:02:20.581560 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:21 crc kubenswrapper[5010]: I0203 10:02:21.444382 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:15:47.183786577 +0000 UTC
Feb 03 10:02:22 crc kubenswrapper[5010]: I0203 10:02:22.445384 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:24:02.682428937 +0000 UTC
Feb 03 10:02:23 crc kubenswrapper[5010]: I0203 10:02:23.446314 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:44:22.436985883 +0000 UTC
Feb 03 10:02:24 crc kubenswrapper[5010]: I0203 10:02:24.324450 5010 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 03 10:02:24 crc kubenswrapper[5010]: I0203 10:02:24.324510 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 03 10:02:24 crc kubenswrapper[5010]: I0203 10:02:24.327752 5010 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 03 10:02:24 crc kubenswrapper[5010]: I0203 10:02:24.327798 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 03 10:02:24 crc kubenswrapper[5010]: I0203 10:02:24.446960 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:26:14.674782033 +0000 UTC
Feb 03 10:02:25 crc kubenswrapper[5010]: I0203 10:02:25.447647 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:52:55.067442293 +0000 UTC
Feb 03 10:02:25 crc kubenswrapper[5010]: I0203 10:02:25.742380 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
attach/detach" Feb 03 10:02:25 crc kubenswrapper[5010]: I0203 10:02:25.743525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:25 crc kubenswrapper[5010]: I0203 10:02:25.743567 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:25 crc kubenswrapper[5010]: I0203 10:02:25.743576 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.447879 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:19:50.952596642 +0000 UTC Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.470617 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.470936 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.472204 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.472276 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.472291 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.483028 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.520301 5010 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.520373 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.600900 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.601933 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.601957 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:26 crc kubenswrapper[5010]: I0203 10:02:26.601965 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:27 crc kubenswrapper[5010]: I0203 10:02:27.448298 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-07 13:28:58.209691668 +0000 UTC Feb 03 10:02:27 crc kubenswrapper[5010]: I0203 10:02:27.805158 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:27 crc kubenswrapper[5010]: I0203 10:02:27.805367 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:27 crc kubenswrapper[5010]: I0203 10:02:27.806626 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:27 crc kubenswrapper[5010]: I0203 10:02:27.806665 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:27 crc kubenswrapper[5010]: I0203 10:02:27.806679 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:27 crc kubenswrapper[5010]: I0203 10:02:27.811951 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:28 crc kubenswrapper[5010]: I0203 10:02:28.448397 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:00:01.792837633 +0000 UTC Feb 03 10:02:28 crc kubenswrapper[5010]: I0203 10:02:28.605361 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:28 crc kubenswrapper[5010]: I0203 10:02:28.606120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:28 crc kubenswrapper[5010]: I0203 10:02:28.606149 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:28 crc kubenswrapper[5010]: I0203 10:02:28.606161 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.318315 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.322147 5010 trace.go:236] Trace[686246960]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 10:02:19.002) (total time: 10319ms): Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[686246960]: ---"Objects listed" error: 10319ms (10:02:29.322) Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[686246960]: [10.319145632s] [10.319145632s] END Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.322173 5010 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.323866 5010 trace.go:236] Trace[1504449500]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 10:02:19.185) (total time: 10138ms): Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[1504449500]: ---"Objects listed" error: 10138ms (10:02:29.323) Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[1504449500]: [10.138584187s] [10.138584187s] END Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.324035 5010 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 
10:02:29.325101 5010 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.325390 5010 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.325448 5010 trace.go:236] Trace[1215420072]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 10:02:14.586) (total time: 14739ms): Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[1215420072]: ---"Objects listed" error: 14738ms (10:02:29.325) Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[1215420072]: [14.739022883s] [14.739022883s] END Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.325463 5010 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.329021 5010 trace.go:236] Trace[892539971]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 10:02:16.735) (total time: 12593ms): Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[892539971]: ---"Objects listed" error: 12593ms (10:02:29.328) Feb 03 10:02:29 crc kubenswrapper[5010]: Trace[892539971]: [12.593666934s] [12.593666934s] END Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.329054 5010 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.334372 5010 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362038 5010 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33026->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362089 5010 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33042->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362106 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33026->192.168.126.11:17697: read: connection reset by peer" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362148 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33042->192.168.126.11:17697: read: connection reset by peer" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362560 5010 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe 
status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362601 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362811 5010 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.362829 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.427109 5010 apiserver.go:52] "Watching apiserver" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.430277 5010 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.430578 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.430929 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.430974 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.431013 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.431157 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.431262 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.431272 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.431309 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.431532 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.431572 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.433625 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434108 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434260 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434268 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434389 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434692 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.434937 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.435168 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.435352 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.439488 5010 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.448657 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:01:35.855834024 +0000 UTC Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.458249 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.470812 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.482183 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.494433 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.505534 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.515499 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.525078 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526472 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526508 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526529 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526551 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526573 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526594 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526613 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: 
I0203 10:02:29.526637 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526658 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526675 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526724 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526741 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526749 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526760 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526811 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526832 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526848 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526865 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526882 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526899 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526915 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526944 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526962 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526979 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526974 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527010 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526972 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.526997 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527532 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527585 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527581 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527750 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.527773 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528272 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528326 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528350 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528411 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528530 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528653 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528901 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528929 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528973 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.528985 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.529117 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.529179 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.529173 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530672 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530714 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530739 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530760 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530781 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530803 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530825 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530848 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530872 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530892 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530911 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530933 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530954 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530971 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.530992 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531001 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531093 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531134 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531233 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531593 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531293 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531424 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531651 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531679 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.531758 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.532582 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.532610 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.532844 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.532865 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533077 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533189 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533242 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533265 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533290 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533317 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533323 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533386 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533467 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533455 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533564 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533380 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533930 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534167 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534187 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534295 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534384 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534394 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534550 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534619 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534667 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.533930 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.535069 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.534960 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.535149 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536162 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536193 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536226 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536297 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536316 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536332 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536347 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536362 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536378 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536393 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536410 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536427 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536443 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536460 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536476 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536436 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536494 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536511 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536527 5010 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536566 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536582 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536597 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536614 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536631 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536648 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536664 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536683 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536699 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536732 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536752 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536767 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536782 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536799 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536814 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536830 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.535956 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537247 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536161 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536404 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536450 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536462 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537300 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536476 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536649 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536695 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536949 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.536999 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.537134 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:02:30.037108389 +0000 UTC m=+20.193084558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537172 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537353 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537664 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537720 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.537835 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538071 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538133 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538101 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538303 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538489 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539018 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539190 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538607 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539028 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.538966 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539265 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539070 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539091 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539447 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539166 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539272 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539326 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539582 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539613 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539639 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539661 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539684 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539706 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539741 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539764 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539782 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539809 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539830 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539851 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539874 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539894 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539916 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539937 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.539978 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540047 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540074 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540096 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540117 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540139 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540162 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540186 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540227 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540251 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540275 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540298 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540323 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540348 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540376 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540394 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540412 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540399 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540455 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540474 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540520 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540540 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540557 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540576 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540594 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540610 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540628 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540628 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540604 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540644 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540707 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540794 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540818 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540851 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540879 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540901 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540898 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540919 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540926 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.540971 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541079 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541106 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541108 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541130 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541155 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541179 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541229 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541254 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541276 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541302 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541310 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541162 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541330 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541362 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541380 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541387 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541398 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541415 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541431 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541448 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541467 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541482 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541519 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541538 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541553 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541568 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541584 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541602 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541619 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541635 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541653 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541667 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541684 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541700 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541715 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541732 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541749 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541764 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541782 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541798 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541814 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541830 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541846 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.541862 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542077 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542094 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542110 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542127 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542143 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542159 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542176 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542192 5010 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542227 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542245 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542261 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542278 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542327 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542477 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542885 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542858 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542913 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542924 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542950 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.542966 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543000 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543030 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543090 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543165 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 
10:02:29.543251 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543284 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543340 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543368 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543450 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543535 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543552 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543593 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543609 5010 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 
03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543621 5010 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543635 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543692 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543707 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543745 5010 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543761 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543777 5010 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543791 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543831 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543847 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543862 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543875 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543912 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node 
\"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543925 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543939 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543952 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543990 5010 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544038 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544078 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544094 5010 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544108 5010 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544123 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545345 5010 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545390 5010 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545412 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545425 5010 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545436 5010 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545459 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545470 5010 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545487 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545499 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545510 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545522 5010 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545533 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545550 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545564 5010 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545575 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545585 5010 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545596 5010 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545612 5010 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545622 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545635 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545648 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545661 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545674 5010 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545686 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545698 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545711 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545725 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545738 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545750 5010 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545763 5010 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545776 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545788 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545801 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545813 5010 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545825 5010 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545838 5010 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545853 5010 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545868 5010 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545883 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545944 5010 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545957 5010 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545970 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545982 5010 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545997 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546010 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546023 5010 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546035 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546049 5010 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546060 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546073 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546085 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546099 5010 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: 
I0203 10:02:29.546113 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546125 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546138 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546151 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546163 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546176 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546190 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546202 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546230 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546250 5010 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546261 5010 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546273 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546285 5010 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 
10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546304 5010 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544498 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546670 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543000 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543011 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543277 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543306 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543728 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543827 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.543971 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544094 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544264 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544349 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544466 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.544612 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.547552 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:30.047531393 +0000 UTC m=+20.203507522 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.544790 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.544955 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.547808 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.548270 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.547608 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:30.047599645 +0000 UTC m=+20.203575774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545266 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.548574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.545555 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546503 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546584 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546653 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.546922 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.547112 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.549704 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.549778 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.549957 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.549972 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550032 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550131 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550195 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550299 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550550 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550793 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.550950 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551297 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551306 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551494 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551696 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551717 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551797 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.551895 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.552171 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.552256 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.552343 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.552449 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.552658 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.553336 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.559971 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.560025 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.560036 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.560096 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:30.060076182 +0000 UTC m=+20.216052371 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.560628 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.562177 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.562636 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.562707 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.562717 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.563385 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.563602 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.563682 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.563835 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.564140 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.564347 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.564599 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.564762 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565147 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565157 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565411 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565475 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565702 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565765 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.565998 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566354 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566124 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566173 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566185 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566873 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.566908 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.567736 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.568040 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.568285 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.568585 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.569737 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.569887 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.569893 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.569922 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.569931 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.569836 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: E0203 10:02:29.569978 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:30.069954183 +0000 UTC m=+20.225930312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.569985 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570026 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570242 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570419 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570519 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570631 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570756 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570752 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.570917 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.571001 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.571907 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.572577 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.572604 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.572631 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.572678 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.574663 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.576912 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.581310 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.581348 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.581984 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.582113 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.582229 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.583406 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.584382 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.586470 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.592677 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.592732 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.597929 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.604577 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.609897 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.612767 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b" exitCode=255 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.612809 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b"} Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.618612 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.624067 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.624298 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.625546 5010 scope.go:117] "RemoveContainer" containerID="8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.635648 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.645287 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647555 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647622 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647699 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647717 5010 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647739 5010 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647767 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647766 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647779 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647852 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647865 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647875 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647890 5010 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647931 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647968 5010 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648005 5010 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648015 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648026 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.647931 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648042 5010 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648074 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648088 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648099 5010 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648110 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648122 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648132 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648142 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648154 5010 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648164 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648174 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648184 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648196 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648206 5010 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648232 5010 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648250 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648261 5010 
reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648279 5010 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648290 5010 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648301 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648312 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648323 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648334 5010 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648344 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648356 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648367 5010 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648379 5010 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648391 5010 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648425 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648437 5010 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648448 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648458 5010 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648469 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648479 5010 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648490 5010 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648514 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648524 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648535 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648545 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648554 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648565 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648575 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648590 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648602 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648612 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648622 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648633 5010 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648643 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648654 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648669 5010 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648679 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648690 5010 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648714 5010 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648724 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648735 5010 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648745 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648759 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648769 5010 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648785 5010 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648795 5010 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648807 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648816 5010 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648831 5010 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648842 5010 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648860 5010 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648870 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648879 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648889 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648906 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648916 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648928 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648939 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648949 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648959 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648968 5010 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648977 5010 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648987 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.648996 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649007 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649019 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649029 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649040 5010 reconciler_common.go:293] "Volume detached for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649055 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649066 5010 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649076 5010 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649087 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649101 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649112 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.649121 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.654979 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.665845 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.679120 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.745426 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.755694 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 10:02:29 crc kubenswrapper[5010]: W0203 10:02:29.759832 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-b94faaa7be1ba906251a3be62e01618ff7a6ccaa2622df7668ce5bab18f3e530 WatchSource:0}: Error finding container b94faaa7be1ba906251a3be62e01618ff7a6ccaa2622df7668ce5bab18f3e530: Status 404 returned error can't find the container with id b94faaa7be1ba906251a3be62e01618ff7a6ccaa2622df7668ce5bab18f3e530 Feb 03 10:02:29 crc kubenswrapper[5010]: I0203 10:02:29.763197 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 10:02:29 crc kubenswrapper[5010]: W0203 10:02:29.765863 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ca37fe8c182aae1ca66969177c79bd19f2838340192c9967d4986dc47bdcb2f3 WatchSource:0}: Error finding container ca37fe8c182aae1ca66969177c79bd19f2838340192c9967d4986dc47bdcb2f3: Status 404 returned error can't find the container with id ca37fe8c182aae1ca66969177c79bd19f2838340192c9967d4986dc47bdcb2f3 Feb 03 10:02:29 crc kubenswrapper[5010]: W0203 10:02:29.782054 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c81094d4e1af07cffae7a24ee49d2644f5218dd6db650c315f27055b13e9cf41 WatchSource:0}: Error finding container c81094d4e1af07cffae7a24ee49d2644f5218dd6db650c315f27055b13e9cf41: Status 404 returned error can't find the container with id c81094d4e1af07cffae7a24ee49d2644f5218dd6db650c315f27055b13e9cf41 Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.052657 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.052814 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:02:31.052783162 +0000 UTC m=+21.208759291 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.053092 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.053124 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.053203 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.053235 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.053327 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:31.053306926 +0000 UTC m=+21.209283065 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.053357 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:31.053347057 +0000 UTC m=+21.209323206 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.153998 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.154046 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154170 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154186 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154196 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154252 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154300 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154314 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154272 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:31.154255529 +0000 UTC m=+21.310231658 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.154442 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:31.154403932 +0000 UTC m=+21.310380061 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.449106 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:38:29.527018959 +0000 UTC Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.501839 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:30 crc kubenswrapper[5010]: E0203 10:02:30.502186 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.505770 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.506271 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.507391 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.508557 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.510661 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.511925 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.513209 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.515168 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.516669 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.518852 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.519458 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.521374 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.521545 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.521856 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.522412 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.523373 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.523925 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 
10:02:30.525062 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.525596 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.526295 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.527450 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.528327 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.528941 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.529455 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.530486 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.530918 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.531994 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.532670 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.533499 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.534069 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.534850 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.535364 5010 kubelet_volumes.go:152] "Cleaned 
up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.535458 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.537947 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.538542 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.539049 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.539997 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.542202 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.543285 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.543819 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.544799 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.545483 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.546355 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.547070 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.549189 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.550134 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.550578 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.551456 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.551993 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.553335 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.553869 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.555053 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.555591 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.556182 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.559651 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.560146 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.562758 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.576889 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.598564 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.613744 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.616920 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.616991 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b94faaa7be1ba906251a3be62e01618ff7a6ccaa2622df7668ce5bab18f3e530"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.618867 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.620953 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.621185 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.622667 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.622743 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.622763 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c81094d4e1af07cffae7a24ee49d2644f5218dd6db650c315f27055b13e9cf41"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.623868 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ca37fe8c182aae1ca66969177c79bd19f2838340192c9967d4986dc47bdcb2f3"} Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.633423 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.653008 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.667122 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.681116 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.694742 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.710702 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.723858 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:30 crc kubenswrapper[5010]: I0203 10:02:30.735523 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.061823 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.061906 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.061936 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.062025 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.062027 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:02:33.061994647 +0000 UTC m=+23.217970776 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.062089 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:33.062070989 +0000 UTC m=+23.218047198 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.062178 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.062377 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:33.062350686 +0000 UTC m=+23.218326805 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.162928 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.163142 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163109 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163345 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163405 5010 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163495 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:33.163479404 +0000 UTC m=+23.319455533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163307 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163630 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163680 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.163767 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:33.163759341 +0000 UTC m=+23.319735470 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.449700 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:02:23.022517086 +0000 UTC Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.501481 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:31 crc kubenswrapper[5010]: I0203 10:02:31.501489 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.501613 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:31 crc kubenswrapper[5010]: E0203 10:02:31.501664 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.450082 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:44:07.818731972 +0000 UTC Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.501131 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:32 crc kubenswrapper[5010]: E0203 10:02:32.501286 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.630709 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601"} Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.646111 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\
\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.661969 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.676938 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.691109 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.710776 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.730419 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:32 crc kubenswrapper[5010]: I0203 10:02:32.751075 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:32Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.081622 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.081813 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.081839 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:02:37.081800051 +0000 UTC m=+27.237776310 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.081916 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.081985 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.082065 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.082079 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:37.082053898 +0000 UTC m=+27.238030217 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.082234 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:37.082195451 +0000 UTC m=+27.238171610 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.183231 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.183284 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183386 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183403 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183413 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183446 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183484 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183499 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183460 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:37.183447762 +0000 UTC m=+27.339423891 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.183579 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:37.183560505 +0000 UTC m=+27.339536644 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.450290 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:35:20.522623962 +0000 UTC Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.501266 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.501314 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.501498 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.501596 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.523541 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.526939 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.533233 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.536187 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.550089 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.562851 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.575800 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.590131 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.603178 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.615114 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.629501 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: E0203 10:02:33.641284 5010 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.658836 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.669618 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.684041 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.695083 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.704429 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.715230 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:33 crc kubenswrapper[5010]: I0203 10:02:33.726936 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:33Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:34 crc kubenswrapper[5010]: I0203 10:02:34.451465 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:21:47.538836261 +0000 UTC Feb 03 10:02:34 crc kubenswrapper[5010]: I0203 10:02:34.501726 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:34 crc kubenswrapper[5010]: E0203 10:02:34.501915 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.201489 5010 csr.go:261] certificate signing request csr-l7wqh is approved, waiting to be issued Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.248135 5010 csr.go:257] certificate signing request csr-l7wqh is issued Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.452102 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:01:20.531090237 +0000 UTC Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.501688 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.501738 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.501834 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.501895 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.726139 5010 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.727833 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.727879 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.727891 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.727968 5010 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.733825 5010 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.734093 5010 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.735142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.735183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.735194 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.735225 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.735241 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.757949 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.760887 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.760922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.760935 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.760952 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.760962 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.771485 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.774424 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.774455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.774466 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.774482 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.774495 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.794709 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.798469 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.798505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.798518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.798543 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.798556 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.811608 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.814693 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.814730 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.814741 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.814754 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.814763 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.830668 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:35 crc kubenswrapper[5010]: E0203 10:02:35.830828 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.832534 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.832565 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.832578 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.832594 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.832604 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.935288 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.935327 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.935335 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.935349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:35 crc kubenswrapper[5010]: I0203 10:02:35.935358 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:35Z","lastTransitionTime":"2026-02-03T10:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.037769 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.037818 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.037830 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.037847 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.037859 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.096433 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-89h2z"] Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.096845 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-s4xnz"] Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.097030 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-f5tpq"] Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.097065 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.097297 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.097338 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.099119 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-cvpds"] Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.099648 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.099729 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100001 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100069 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100364 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100412 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100502 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100691 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.100746 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.101002 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.101204 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.101682 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.101945 5010 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.102237 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.102405 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.102889 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.117691 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.135880 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.139953 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.140178 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.140312 5010 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.140402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.140491 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.149763 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.160788 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.178199 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.196674 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207164 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207232 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-etc-kubernetes\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207258 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l8d2\" (UniqueName: \"kubernetes.io/projected/cab56d94-9407-4305-9e87-55e378a0878f-kube-api-access-6l8d2\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207284 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-mcd-auth-proxy-config\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207304 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-bin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207370 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-kubelet\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207420 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-daemon-config\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207528 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-multus\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207607 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cni-binary-copy\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207648 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-hostroot\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207674 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mclqv\" (UniqueName: \"kubernetes.io/projected/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-kube-api-access-mclqv\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207693 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmmvm\" (UniqueName: \"kubernetes.io/projected/d5c4274d-0165-4762-850f-b2a2ceb57c0b-kube-api-access-nmmvm\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207711 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-proxy-tls\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207730 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-multus-certs\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207775 5010 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f57xn\" (UniqueName: \"kubernetes.io/projected/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-kube-api-access-f57xn\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207808 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-conf-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207835 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cab56d94-9407-4305-9e87-55e378a0878f-hosts-file\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207886 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cnibin\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207906 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207957 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-system-cni-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207934 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.207978 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-binary-copy\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208081 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cnibin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208107 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-socket-dir-parent\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208130 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-k8s-cni-cncf-io\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208172 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-rootfs\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208197 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-os-release\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208262 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-system-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208281 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208332 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-os-release\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.208364 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-netns\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.220302 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.231470 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.242777 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.243073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.243101 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.243113 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.243130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.243142 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.249505 5010 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-03 09:57:35 +0000 UTC, rotation deadline is 2026-12-03 11:45:48.059894672 +0000 UTC Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.249559 5010 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7273h43m11.810338819s for next certificate rotation Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.254084 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.264124 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.274506 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.284034 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.307015 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309302 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-mcd-auth-proxy-config\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309347 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-bin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 
10:02:36.309375 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-kubelet\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309396 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-daemon-config\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309421 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-multus\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309444 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-hostroot\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309465 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cni-binary-copy\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309465 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-kubelet\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309489 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmmvm\" (UniqueName: \"kubernetes.io/projected/d5c4274d-0165-4762-850f-b2a2ceb57c0b-kube-api-access-nmmvm\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309492 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-multus\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309460 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-var-lib-cni-bin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309546 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mclqv\" (UniqueName: 
\"kubernetes.io/projected/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-kube-api-access-mclqv\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309750 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-hostroot\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309784 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f57xn\" (UniqueName: \"kubernetes.io/projected/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-kube-api-access-f57xn\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.309828 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-proxy-tls\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310006 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-multus-certs\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310030 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-mcd-auth-proxy-config\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310042 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-conf-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310070 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-multus-certs\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310069 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cab56d94-9407-4305-9e87-55e378a0878f-hosts-file\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310100 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cnibin\") pod \"multus-additional-cni-plugins-cvpds\" (UID: 
\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310110 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cab56d94-9407-4305-9e87-55e378a0878f-hosts-file\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310115 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310138 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-system-cni-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310142 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-conf-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310155 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-binary-copy\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310173 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-socket-dir-parent\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310190 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-k8s-cni-cncf-io\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310259 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cni-binary-copy\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310275 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cnibin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc 
kubenswrapper[5010]: I0203 10:02:36.310295 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-daemon-config\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310315 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cnibin\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310304 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-rootfs\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310343 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-k8s-cni-cncf-io\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310355 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-socket-dir-parent\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310355 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-os-release\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310381 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-cnibin\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310397 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-system-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310325 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-rootfs\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310426 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-system-cni-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310433 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-os-release\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310452 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310463 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-system-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310479 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-netns\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310470 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-os-release\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310402 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-os-release\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310507 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l8d2\" (UniqueName: \"kubernetes.io/projected/cab56d94-9407-4305-9e87-55e378a0878f-kube-api-access-6l8d2\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310539 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-host-run-netns\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310567 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " 
pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310599 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-etc-kubernetes\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310663 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-multus-cni-dir\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310694 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-etc-kubernetes\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.310937 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-binary-copy\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.311399 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5c4274d-0165-4762-850f-b2a2ceb57c0b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.311635 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5c4274d-0165-4762-850f-b2a2ceb57c0b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.316945 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-proxy-tls\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.331920 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmmvm\" (UniqueName: \"kubernetes.io/projected/d5c4274d-0165-4762-850f-b2a2ceb57c0b-kube-api-access-nmmvm\") pod \"multus-additional-cni-plugins-cvpds\" (UID: \"d5c4274d-0165-4762-850f-b2a2ceb57c0b\") " pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.334533 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.335837 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mclqv\" (UniqueName: \"kubernetes.io/projected/e607e2ef-d3d6-4db0-b514-0d5321d9d28d-kube-api-access-mclqv\") pod \"machine-config-daemon-s4xnz\" (UID: \"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\") " pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.339873 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f57xn\" (UniqueName: \"kubernetes.io/projected/8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef-kube-api-access-f57xn\") pod \"multus-f5tpq\" (UID: \"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\") " pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.345765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.345793 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.345802 5010 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.345815 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.345825 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.350203 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l8d2\" (UniqueName: \"kubernetes.io/projected/cab56d94-9407-4305-9e87-55e378a0878f-kube-api-access-6l8d2\") pod \"node-resolver-89h2z\" (UID: \"cab56d94-9407-4305-9e87-55e378a0878f\") " pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.351010 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.363434 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.375643 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.384013 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.395681 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.410842 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-89h2z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.420029 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-f5tpq" Feb 03 10:02:36 crc kubenswrapper[5010]: W0203 10:02:36.424188 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcab56d94_9407_4305_9e87_55e378a0878f.slice/crio-e19e7361e8845bd89910cf96bc0493054812d1a72d9f87b02465696b42a4be0c WatchSource:0}: Error finding container e19e7361e8845bd89910cf96bc0493054812d1a72d9f87b02465696b42a4be0c: Status 404 returned error can't find the container with id e19e7361e8845bd89910cf96bc0493054812d1a72d9f87b02465696b42a4be0c Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.429462 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:02:36 crc kubenswrapper[5010]: W0203 10:02:36.433813 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b16bcfb_db8c_4fbe_98f3_2d6c5353cfef.slice/crio-b7de1ec682521ef69328307beddc09d19a5c9f3f8c16189a78b2019cf09f91de WatchSource:0}: Error finding container b7de1ec682521ef69328307beddc09d19a5c9f3f8c16189a78b2019cf09f91de: Status 404 returned error can't find the container with id b7de1ec682521ef69328307beddc09d19a5c9f3f8c16189a78b2019cf09f91de Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.437833 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cvpds" Feb 03 10:02:36 crc kubenswrapper[5010]: W0203 10:02:36.446181 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode607e2ef_d3d6_4db0_b514_0d5321d9d28d.slice/crio-b66aac4a67055d24ac3f5a7b433b8c06a459a551298364f2b91d5e5e6ab6845a WatchSource:0}: Error finding container b66aac4a67055d24ac3f5a7b433b8c06a459a551298364f2b91d5e5e6ab6845a: Status 404 returned error can't find the container with id b66aac4a67055d24ac3f5a7b433b8c06a459a551298364f2b91d5e5e6ab6845a Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.447288 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.447333 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.447349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.447366 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.447380 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.452236 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:12:07.360886233 +0000 UTC Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.475420 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68p7p"] Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.476447 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.478330 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.478349 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.478530 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.479008 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.479086 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.481289 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.481206 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.500635 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.502200 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:36 crc kubenswrapper[5010]: E0203 10:02:36.502340 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.511683 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512064 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512108 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512125 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: 
\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512142 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512159 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xwzz\" (UniqueName: \"kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.512180 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.513621 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515337 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515425 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515444 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515470 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515490 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515510 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515525 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515540 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515598 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515633 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515657 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515685 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.515727 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.520590 5010 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.533103 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.543656 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.552242 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.552278 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.552292 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.552308 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.552319 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.556316 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.568617 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.579830 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.594193 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.604252 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.616883 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.616934 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.616957 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.616978 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.616999 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617021 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617044 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617067 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617088 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: 
\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617131 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617151 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617189 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617204 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617239 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617256 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xwzz\" (UniqueName: \"kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617282 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617311 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617326 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" 
Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617342 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.617362 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.618238 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.618912 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.618984 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619011 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619176 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619204 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619284 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619294 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619290 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619240 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619371 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619444 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619496 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619513 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.619803 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.620029 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.620060 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides\") pod \"ovnkube-node-68p7p\" (UID: 
\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.620045 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.623162 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.625552 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 
10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.635019 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xwzz\" (UniqueName: \"kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz\") pod \"ovnkube-node-68p7p\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.637624 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.648311 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-89h2z" event={"ID":"cab56d94-9407-4305-9e87-55e378a0878f","Type":"ContainerStarted","Data":"e19e7361e8845bd89910cf96bc0493054812d1a72d9f87b02465696b42a4be0c"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.650539 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"b66aac4a67055d24ac3f5a7b433b8c06a459a551298364f2b91d5e5e6ab6845a"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.651286 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.653838 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerStarted","Data":"b7de1ec682521ef69328307beddc09d19a5c9f3f8c16189a78b2019cf09f91de"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.653939 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.654008 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.654022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc 
kubenswrapper[5010]: I0203 10:02:36.654047 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.654065 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.654979 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerStarted","Data":"0e1fef134ebf63c229dae47579579581b3e3f1fa051f07556569567c2de2d944"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.756924 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.756966 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.756975 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.756997 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.757011 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.860193 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.860278 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.860288 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.860301 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.860310 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.901015 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:36 crc kubenswrapper[5010]: W0203 10:02:36.914440 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafbb630a_0dee_4c9c_90ff_cb710b9da3f2.slice/crio-397d6ad2bb41a4df9c0dc30fd14d52b9e67cbf17ccd52dacef60dc2182647ba3 WatchSource:0}: Error finding container 397d6ad2bb41a4df9c0dc30fd14d52b9e67cbf17ccd52dacef60dc2182647ba3: Status 404 returned error can't find the container with id 397d6ad2bb41a4df9c0dc30fd14d52b9e67cbf17ccd52dacef60dc2182647ba3 Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.963257 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.963290 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.963301 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.963321 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:36 crc kubenswrapper[5010]: I0203 10:02:36.963332 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:36Z","lastTransitionTime":"2026-02-03T10:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.065918 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.065963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.065976 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.065993 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.066006 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.127626 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.127721 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.127773 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:02:45.127749562 +0000 UTC m=+35.283725701 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.127837 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.127857 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.127880 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:45.127870355 +0000 UTC m=+35.283846574 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.128008 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.128126 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:45.1280736 +0000 UTC m=+35.284049739 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.168695 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.168728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.168737 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.168752 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.168762 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.229432 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.229470 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229605 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229622 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229633 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229672 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:45.229659759 +0000 UTC m=+35.385635888 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229949 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229974 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.229986 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.230028 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:45.230016268 +0000 UTC m=+35.385992397 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.270879 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.270949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.270963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.270980 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.270993 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.373277 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.373319 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.373329 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.373346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.373358 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.453253 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:11:51.948948132 +0000 UTC Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.475202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.475259 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.475269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.475287 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.475297 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.501739 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.501806 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.501872 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:37 crc kubenswrapper[5010]: E0203 10:02:37.501991 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.577179 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.577241 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.577257 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.577274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.577285 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.662092 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6" exitCode=0 Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.662143 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.664148 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-89h2z" event={"ID":"cab56d94-9407-4305-9e87-55e378a0878f","Type":"ContainerStarted","Data":"a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.665676 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53" exitCode=0 Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.665738 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.665766 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" 
event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"397d6ad2bb41a4df9c0dc30fd14d52b9e67cbf17ccd52dacef60dc2182647ba3"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.668437 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.668474 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.669672 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerStarted","Data":"b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.679848 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.679884 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.679893 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.679907 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.679918 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.681825 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.695126 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.707197 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.720122 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.731337 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.741459 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.755149 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.768355 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.780384 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.782244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.782278 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.782290 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.782306 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.782319 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.792365 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.805235 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.816340 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.835295 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.851313 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc 
kubenswrapper[5010]: I0203 10:02:37.863818 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.875530 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.885159 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.885207 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.885235 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.885254 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.885266 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.888956 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.909148 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.921881 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.935043 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.951085 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.962754 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.974527 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.987921 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.988606 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.988635 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.988645 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.988660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:37 crc kubenswrapper[5010]: I0203 10:02:37.988670 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:37Z","lastTransitionTime":"2026-02-03T10:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.004730 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.019740 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.091566 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.091632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.091647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.091663 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.091674 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.193569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.193614 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.193625 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.193640 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.193656 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.295825 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.296144 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.296153 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.296168 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.296178 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.397929 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.397970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.397982 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.397999 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.398011 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.453928 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:59:47.497532664 +0000 UTC Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.500468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.500518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.500531 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.500549 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.500562 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.501135 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:38 crc kubenswrapper[5010]: E0203 10:02:38.501247 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.603138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.603185 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.603198 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.603242 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.603259 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.676299 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.676345 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.676356 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.676368 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.676376 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.678951 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb" exitCode=0 Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.679105 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.696826 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.706389 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.706454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.706467 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.706488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.706503 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.712112 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.726250 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.740158 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.752375 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.764480 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.778967 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.798120 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812374 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812551 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812587 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812617 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.812642 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.825643 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.844272 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.859126 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.871617 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:38Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.914743 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.914780 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.914790 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.914806 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:38 crc kubenswrapper[5010]: I0203 10:02:38.914817 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:38Z","lastTransitionTime":"2026-02-03T10:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.016800 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.016846 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.016857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.016876 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.016897 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.021030 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.035201 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7lfkq"] Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.035363 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.035736 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.037064 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.037148 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.037681 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.039984 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.050789 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b1
54edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.059620 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.069607 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.079230 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.089378 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.100128 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.119778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.119821 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.119833 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.119849 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.119861 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.121392 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.132669 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.145286 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.149459 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a594fab0-c299-4489-be04-95a81c6dd272-serviceca\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.149505 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a594fab0-c299-4489-be04-95a81c6dd272-host\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.149531 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llslg\" (UniqueName: \"kubernetes.io/projected/a594fab0-c299-4489-be04-95a81c6dd272-kube-api-access-llslg\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.156355 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.168790 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.179759 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd
5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.192748 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.205610 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.217202 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.221844 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.221878 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.221901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.221916 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.221926 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.231402 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.244573 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.250911 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a594fab0-c299-4489-be04-95a81c6dd272-host\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.250957 5010 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-llslg\" (UniqueName: \"kubernetes.io/projected/a594fab0-c299-4489-be04-95a81c6dd272-kube-api-access-llslg\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.251035 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a594fab0-c299-4489-be04-95a81c6dd272-serviceca\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.251424 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a594fab0-c299-4489-be04-95a81c6dd272-host\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.251889 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a594fab0-c299-4489-be04-95a81c6dd272-serviceca\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.258444 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.269791 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.269865 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llslg\" (UniqueName: \"kubernetes.io/projected/a594fab0-c299-4489-be04-95a81c6dd272-kube-api-access-llslg\") pod \"node-ca-7lfkq\" (UID: \"a594fab0-c299-4489-be04-95a81c6dd272\") " pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.281036 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.291253 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.301547 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.318830 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.324451 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.324489 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.324500 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.324515 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.324525 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.331158 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.343765 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.351363 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7lfkq" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.356551 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.427451 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.427723 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.427734 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.427748 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.427759 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.455013 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:50:26.429756338 +0000 UTC Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.501242 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.501351 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:39 crc kubenswrapper[5010]: E0203 10:02:39.501426 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:39 crc kubenswrapper[5010]: E0203 10:02:39.502249 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.533245 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.533297 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.533311 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.533331 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.533349 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.635991 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.636033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.636043 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.636059 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.636070 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.685327 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.686492 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7lfkq" event={"ID":"a594fab0-c299-4489-be04-95a81c6dd272","Type":"ContainerStarted","Data":"5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.686538 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7lfkq" event={"ID":"a594fab0-c299-4489-be04-95a81c6dd272","Type":"ContainerStarted","Data":"4209a2a84405f3e5ebd4b7fefddd1dd9531d4d650846b426212c9042285e2146"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.688887 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa" exitCode=0 Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.688916 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.699230 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.709374 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.723463 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739465 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739520 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739538 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739560 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739577 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.739829 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.752872 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.763403 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.784733 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.793495 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.807942 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.820937 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.832913 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.842240 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.842272 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.842281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.842295 5010 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.842305 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.844710 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.858759 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.874171 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd
5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.890306 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.902854 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.914159 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.924200 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.940114 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z"
Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.944716 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.944747 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.944756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.944770 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.944780 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:39Z","lastTransitionTime":"2026-02-03T10:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:39 crc kubenswrapper[5010]: I0203 10:02:39.976932 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.000327 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:39Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.015454 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.026720 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.040896 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.046397 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.046429 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.046443 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.046457 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.046467 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.052133 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.069631 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.082506 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.095140 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.148821 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.148868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.148880 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.148898 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.148909 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.251826 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.251879 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.251893 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.251922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.251937 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.291587 5010 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.353937 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.354151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.354265 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.354350 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.354436 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.455453 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:53:35.926305757 +0000 UTC Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.457488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.457542 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.457553 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.457571 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.457587 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.501376 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:40 crc kubenswrapper[5010]: E0203 10:02:40.501558 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.514946 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.526916 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.535922 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.549122 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.559353 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.559383 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.559392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.559404 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.559413 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.561944 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.576032 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.592406 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.605355 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.621794 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.631957 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.648499 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.657716 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.661242 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.661293 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.661307 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.661324 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.661336 5010 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.668079 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.679694 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.694242 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e" exitCode=0 Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.694311 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" 
event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.710062 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.722162 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.733698 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.742592 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.758458 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.764969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.765030 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.765042 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.765084 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.765096 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.770739 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.787764 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.798364 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.810896 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.825946 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.866468 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.866996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.867013 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.867022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.867056 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.867067 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.907938 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.949097 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.969767 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.969809 5010 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.969819 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.969835 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.969845 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:40Z","lastTransitionTime":"2026-02-03T10:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:40 crc kubenswrapper[5010]: I0203 10:02:40.989577 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.071835 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.071873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.071884 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.071901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.071912 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.174815 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.174901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.174921 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.174945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.174962 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.277327 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.277370 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.277385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.277403 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.277416 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.379646 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.379693 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.379710 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.379731 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.379746 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.456238 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:55:02.155463336 +0000 UTC Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.482424 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.482461 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.482470 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.482485 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.482494 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.502059 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:41 crc kubenswrapper[5010]: E0203 10:02:41.502241 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.502067 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:41 crc kubenswrapper[5010]: E0203 10:02:41.502438 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.585342 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.585383 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.585391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.585433 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.585444 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.688684 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.688783 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.688803 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.688828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.688845 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.703401 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.706862 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9" exitCode=0 Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.706904 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.721411 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.739047 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.755981 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.771350 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.790247 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z 
is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.790972 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.791001 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.791010 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.791023 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.791034 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.800709 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.812800 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.824565 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.836898 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.849391 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.862478 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.877586 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.887710 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.893115 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.893182 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.893192 5010 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.893205 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.893251 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.896410 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.995295 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:41 
crc kubenswrapper[5010]: I0203 10:02:41.995323 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.995331 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.995345 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:41 crc kubenswrapper[5010]: I0203 10:02:41.995356 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:41Z","lastTransitionTime":"2026-02-03T10:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.098303 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.098356 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.098375 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.098402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.098421 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.200952 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.200992 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.201005 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.201020 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.201031 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.304073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.304127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.304142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.304163 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.304178 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.406649 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.406694 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.406709 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.406733 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.406753 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.456440 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:36:23.535613324 +0000 UTC Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.501975 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:42 crc kubenswrapper[5010]: E0203 10:02:42.502099 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.508268 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.508303 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.508311 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.508325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.508336 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.611061 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.611101 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.611110 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.611123 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.611131 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.712901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.712952 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.712970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.712990 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.713005 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.715671 5010 generic.go:334] "Generic (PLEG): container finished" podID="d5c4274d-0165-4762-850f-b2a2ceb57c0b" containerID="da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d" exitCode=0 Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.715708 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerDied","Data":"da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.733855 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.745455 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.762478 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32e
fe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.784539 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.795990 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.809671 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.814685 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.814720 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.814729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.814744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.814754 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.828727 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.839588 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.850880 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.859726 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.870848 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.880375 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.889986 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.901005 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:42Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.917003 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.917041 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.917051 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.917067 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:42 crc kubenswrapper[5010]: I0203 10:02:42.917079 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:42Z","lastTransitionTime":"2026-02-03T10:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.018656 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.018901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.018908 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.018921 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.018930 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.121101 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.121133 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.121141 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.121155 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.121164 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.223372 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.223432 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.223447 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.223468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.223481 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.325698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.325730 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.325742 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.325757 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.325768 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.427463 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.427496 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.427504 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.427518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.427527 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.457048 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:04:16.66007969 +0000 UTC Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.501671 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.501671 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:43 crc kubenswrapper[5010]: E0203 10:02:43.501839 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:43 crc kubenswrapper[5010]: E0203 10:02:43.501949 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.529387 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.529425 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.529436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.529453 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.529466 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.635108 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.635148 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.635156 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.635169 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.635177 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.723001 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.723614 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.727589 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" event={"ID":"d5c4274d-0165-4762-850f-b2a2ceb57c0b","Type":"ContainerStarted","Data":"1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738232 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738488 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738597 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738751 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.738765 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.749418 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.751270 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.762505 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.774928 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.786000 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.803054 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc 
kubenswrapper[5010]: I0203 10:02:43.814957 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.827182 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.839427 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.841131 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.841175 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.841185 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.841201 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.841476 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.851755 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.865923 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.877579 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.885690 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.898173 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.911849 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.922399 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.933289 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.943990 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.944029 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.944040 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.944055 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.944067 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:43Z","lastTransitionTime":"2026-02-03T10:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.944949 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.956275 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.965927 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.980398 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:43 crc kubenswrapper[5010]: I0203 10:02:43.989551 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:43Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.001887 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.011176 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.020806 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.032196 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.042400 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.046205 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.046274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.046289 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.046306 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.046318 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.059116 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2
cf775118fb6e82132a3e8658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.149063 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.149120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.149142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.149166 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.149181 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.253729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.253814 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.253859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.253890 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.253913 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.356036 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.356083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.356094 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.356109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.356119 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.457397 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:55:06.361621089 +0000 UTC Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.459544 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.459596 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.459613 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.459637 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.459655 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.502200 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:44 crc kubenswrapper[5010]: E0203 10:02:44.502446 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.562881 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.562970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.563035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.563065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.563085 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.665962 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.666006 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.666017 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.666034 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.666044 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.731961 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.732742 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.761197 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.768193 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.768304 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.768330 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.768364 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.768389 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.772193 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.785250 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.796485 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.810672 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.823802 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.835310 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.855627 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\
\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 
10:02:44.871132 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.871178 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.871189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.871209 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.871238 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.875748 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.886975 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.902600 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.915742 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.932436 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.944853 5010 status_manager.go:875] "Failed to update status for pod" 
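[annotation] The repeated status_manager.go "Failed to update status for pod" entries above share one root cause: every pod status patch is admitted through the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and its serving certificate expired on 2025-08-24 while the node clock reads 2026-02-03. Go's TLS stack rejects the handshake before the patch is ever delivered. Below is a minimal sketch of the validity-window check that yields this exact error text; the certificate file path is hypothetical, not taken from the cluster.

```go
// Sketch of the expiry check crypto/x509 performs during the TLS handshake.
// "webhook-cert.pem" is a hypothetical path for illustration only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("webhook-cert.pem") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// The same window check that fails once the clock (2026-02-03) passes
	// the certificate's NotAfter (2025-08-24 in the log above).
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```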
pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.961019 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.973955 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.973985 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:44 crc 
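[annotation] By this point the same webhook failure has blocked status updates for five different pods: kube-controller-manager-crc, network-operator-58b4c7f79c-55gtf, network-node-identity-vrzqb, node-resolver-89h2z, and multus-additional-cni-plugins-cvpds. Note the circularity: the webhook being called is served by the network-node-identity pod itself, whose own status patch is among those rejected. A small diagnostic sketch (an assumed workflow, not from the source) that dials the failing endpoint from the log and prints the serving certificate's validity window, skipping verification so the handshake succeeds even though the certificate is expired:

```go
// Diagnostic sketch: connect to the webhook endpoint the kubelet cannot
// reach (127.0.0.1:9743, from the log) and report the cert validity window.
// InsecureSkipVerify is deliberate here so an expired cert can be inspected.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("%s: NotBefore=%s NotAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```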
kubenswrapper[5010]: I0203 10:02:44.973995 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.974011 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:44 crc kubenswrapper[5010]: I0203 10:02:44.974020 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:44Z","lastTransitionTime":"2026-02-03T10:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.076888 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.076923 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.076935 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.076951 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.076964 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.183816 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.183859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.183870 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.183886 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.183896 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.210179 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.210355 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:03:01.210320596 +0000 UTC m=+51.366296725 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.210423 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.210448 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.210541 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.210599 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:01.210583612 +0000 UTC m=+51.366559801 (durationBeforeRetry 16s). 
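[annotation] The UnmountVolume.TearDown failure just above is independent of the webhook problem: the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered with the kubelet after restart, so teardown cannot proceed and the operation is parked under exponential backoff ("durationBeforeRetry 16s", next attempt at 10:03:01). A sketch of that doubling schedule follows; the constants (initial 500ms, factor 2, cap 2m2s) mirror kubelet's volume-operation backoff but are stated here as an assumption, not quoted from the source.

```go
// Sketch of the doubling backoff behind "durationBeforeRetry 16s" above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const factor = 2.0
	max := 2*time.Minute + 2*time.Second // assumed cap
	d := 500 * time.Millisecond          // assumed initial delay
	for attempt := 1; attempt <= 8; attempt++ {
		// Attempt 6 waits 16s, matching the UnmountVolume error above.
		fmt.Printf("attempt %d: wait %s before retry\n", attempt, d)
		d = time.Duration(float64(d) * factor)
		if d > max {
			d = max
		}
	}
}
```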
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.210657 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.210756 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:01.210735966 +0000 UTC m=+51.366712095 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.286677 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.286727 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.286737 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.286760 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.286772 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.311717 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.311779 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.311947 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.311986 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.311998 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.311994 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.312024 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.312039 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.312064 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:01.312045429 +0000 UTC m=+51.468021558 (durationBeforeRetry 16s). 
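[annotation] The projected-volume failures for the network-diagnostics pods follow the same "not registered" pattern: immediately after a kubelet restart its object cache has not yet synced the kube-root-ca.crt and openshift-service-ca.crt configmaps, so building the kube-api-access token volume fails per source, and the per-source errors are folded into the bracketed list seen above. A sketch of that aggregation; the helper mirrors the upstream k8s.io/apimachinery utility that produces this bracket format.

```go
// Sketch of how the bracketed error list above
// ("[object ... not registered, object ... not registered]") is built:
// one error per projected source, folded into an aggregate.
package main

import (
	"fmt"

	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

func main() {
	errs := []error{
		fmt.Errorf("object %q/%q not registered", "openshift-network-diagnostics", "kube-root-ca.crt"),
		fmt.Errorf("object %q/%q not registered", "openshift-network-diagnostics", "openshift-service-ca.crt"),
	}
	// NewAggregate renders as [err1, err2], matching the log format.
	fmt.Println(utilerrors.NewAggregate(errs))
}
```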
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.312348 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:01.312320685 +0000 UTC m=+51.468296884 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.389288 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.389327 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.389336 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.389349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.389358 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.458165 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:52:28.738917298 +0000 UTC Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.491117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.491151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.491159 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.491204 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.491230 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.501550 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.501647 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.501732 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.501913 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
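[annotation] The recurring NodeNotReady condition is a third distinct failure in this window: the container runtime reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, so sandbox creation for non-host-network pods (network-check-target-xd92c, network-check-source-55646444c4-trplf just below) is skipped entirely until the network provider writes its config. A simplified sketch of that readiness gate, loosely modeled on how libcni scans the conf directory; the accepted extensions are an assumption.

```go
// Sketch of the gate behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": NetworkReady stays false until at least one
// CNI config file appears in the directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigPresent(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed extension list
			return true
		}
	}
	return false
}

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	fmt.Printf("NetworkReady=%v for %s\n", cniConfigPresent(dir), dir)
}
```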
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.595961 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.596037 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.596048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.596062 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.596075 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.698391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.698688 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.698697 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.698712 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.698722 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.735287 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.801874 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.801905 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.801917 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.801932 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.801941 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.904431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.904468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.904476 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.904490 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.904503 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.905388 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.905419 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.905430 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.905442 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.905451 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.916820 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.919491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.919514 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
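[annotation] Node-level status patches fail the same way as the pod patches, only through the node.network-node-identity.openshift.io webhook on the same endpoint, and kubelet_node_status.go:585 responds by retrying the full patch, images list included, which is why the multi-kilobyte payload repeats below. A sketch of that retry loop; the count of 5 mirrors kubelet's nodeStatusUpdateRetry constant and is an assumption here, as is the final error text.

```go
// Sketch of the retry loop behind "Error updating node status, will retry".
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed retry count per sync

func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patch(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	webhookDown := errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
	_ = updateNodeStatus(func() error { return webhookDown })
}
```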
event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.919522 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.919534 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.919543 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.935721 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.942275 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.942315 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.942326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.942365 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.942377 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.956878 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.961786 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.961818 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
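Every failed attempt above reports the same root cause: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03. A minimal Go sketch of how one might confirm this from the node itself, assuming the 127.0.0.1:9743 endpoint from the log is reachable (InsecureSkipVerify is set only so the handshake proceeds far enough to expose the expired certificate; this is a diagnostic probe, not part of the logged system):

// certprobe.go - print the validity window of the certificate served on
// 127.0.0.1:9743, the webhook endpoint named in the kubelet errors above.
// Diagnostic sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Skip chain verification so we can still read an already-expired cert.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter)
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}

Run against the endpoint above, this should print a notAfter of 2025-08-24T17:21:41Z, matching the x509 error in every retry.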
event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.961830 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.961845 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.961856 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.981743 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.985198 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.985246 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
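The same patch error recurs because the kubelet retries the node-status update a fixed number of times per sync before giving up; the "Unable to update node status ... exceeds retry count" entry just below marks the point where the attempt budget is exhausted. A bounded-retry sketch of that shape, assuming a limit of 5 in the spirit of the kubelet's nodeStatusUpdateRetry constant (the function names here are illustrative, not the kubelet's actual code):

// retry.go - bounded retry in the style of the kubelet's node status updater.
// Illustrative sketch; the error text is modeled on the log above.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // kubelet gives up after a fixed attempt count

// patchNodeStatus stands in for the PATCH that keeps failing in the log
// (the admission webhook rejects it because its TLS certificate is expired).
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}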
event="NodeHasNoDiskPressure" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.985258 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.985277 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:45 crc kubenswrapper[5010]: I0203 10:02:45.985289 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:45Z","lastTransitionTime":"2026-02-03T10:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.997887 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:45 crc kubenswrapper[5010]: E0203 10:02:45.998059 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.006018 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.006054 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.006066 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.006080 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.006090 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.108287 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.108337 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.108349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.108363 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.108373 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.211209 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.211360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.211379 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.211402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.211453 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.314424 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.314498 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.314512 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.314537 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.314548 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.417466 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.417545 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.417568 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.417589 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.417642 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.493253 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:15:08.316774449 +0000 UTC Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.501842 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:46 crc kubenswrapper[5010]: E0203 10:02:46.502061 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.519800 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.519859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.519884 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.519905 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.519921 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.622076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.622160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.622179 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.622199 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.622233 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.724929 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.724968 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.724979 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.724993 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.725005 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.745329 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/0.log" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.749888 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658" exitCode=1 Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.749945 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.751020 5010 scope.go:117] "RemoveContainer" containerID="6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.765357 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.782898 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.800823 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.816911 5010 status_manager.go:875] "Failed to update status for pod" 
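Every "Failed to update status for pod" record in this burst fails the same way: the kubelet's status patch is intercepted by the pod.network-node-identity.openshift.io mutating webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-03. One way to confirm from the node is to read the peer certificate off a TLS handshake; a minimal Go sketch follows (InsecureSkipVerify is used only so the handshake completes far enough to expose the expired certificate):

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    // Inspect the serving certificate that the failing webhook posts
    // above are being rejected against. The address comes from the log.
    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
                cert.Subject,
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339))
            if time.Now().After(cert.NotAfter) {
                fmt.Println("-> expired; matches the x509 error in the records above")
            }
        }
    }
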
pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.827022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.827075 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.827092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.827116 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.827132 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.832248 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02
-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 
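The patch bodies in these records are hard to read because the JSON is quoted twice: once when it is formatted into the "failed to patch status" error string, and once more when the logger quotes the err field, so every quote surfaces as \\\" in the journal. Collapsing that escaping and re-indenting recovers the original patch. A small Go sketch over a shortened stand-in fragment (the real payloads above are much longer):

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "strings"
    )

    // Undo one layer of the journal's double escaping and pretty-print
    // the embedded status patch. The fragment is a shortened stand-in.
    func main() {
        fragment := `{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"}}`
        plain := strings.NewReplacer(`\\\"`, `"`, `\\\\`, `\`).Replace(fragment)
        var pretty bytes.Buffer
        if err := json.Indent(&pretty, []byte(plain), "", "  "); err != nil {
            fmt.Println("indent:", err)
            return
        }
        fmt.Println(pretty.String())
    }
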
10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.844893 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.859323 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"R
unning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.876795 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2
cf775118fb6e82132a3e8658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:46Z\\\",\\\"message\\\":\\\"534 6343 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 10:02:45.979597 6343 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 10:02:45.979604 6343 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 10:02:45.979608 6343 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979632 6343 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 10:02:45.979634 6343 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 10:02:45.979638 6343 factory.go:656] Stopping watch factory\\\\nI0203 10:02:45.979653 6343 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 10:02:45.979655 6343 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 10:02:45.979673 6343 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 10:02:45.979691 6343 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979842 6343 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c
7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.895021 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.909878 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.923827 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.929389 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.929424 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.929432 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.929447 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.929459 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
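The condition={...} payload attached to each "Node became not ready" record is plain JSON, so it can be pulled out of the journal and decoded directly when scanning for the transition time. A minimal sketch with a simplified struct (not the actual k8s.io/api NodeCondition type), using one of the conditions above with its message shortened:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // Simplified stand-in for a node condition as logged above.
    type nodeCondition struct {
        Type               string    `json:"type"`
        Status             string    `json:"status"`
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`
        Message            string    `json:"message"`
    }

    func main() {
        // Condition copied from the records above; message shortened.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:46Z","lastTransitionTime":"2026-02-03T10:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            fmt.Println("unmarshal:", err)
            return
        }
        fmt.Printf("%s=%s since %s (%s)\n",
            c.Type, c.Status, c.LastTransitionTime.Format(time.RFC3339), c.Reason)
    }
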
Has your network provider started?"} Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.937016 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.951840 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:46 crc kubenswrapper[5010]: I0203 10:02:46.962571 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.031621 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.031705 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.031723 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.031810 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.031845 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.133970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.134025 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.134043 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.134065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.134081 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.236160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.236189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.236197 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.236236 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.236245 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.339096 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.339138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.339149 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.339168 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.339179 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.442227 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.442284 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.442299 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.442319 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.442336 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.493611 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:26:15.850823457 +0000 UTC Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.501854 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.501967 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:47 crc kubenswrapper[5010]: E0203 10:02:47.502019 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:47 crc kubenswrapper[5010]: E0203 10:02:47.502154 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.546645 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.546700 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.546715 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.546739 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.546757 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.649994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.650059 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.650085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.650109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.650122 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.752348 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.752426 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.752443 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.752468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.752486 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.756046 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/0.log" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.760199 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.760389 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.777693 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.791972 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.808434 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.818632 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.848862 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:46Z\\\",\\\"message\\\":\\\"534 6343 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 10:02:45.979597 6343 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 10:02:45.979604 6343 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 10:02:45.979608 6343 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979632 6343 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 10:02:45.979634 6343 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 10:02:45.979638 6343 factory.go:656] Stopping watch factory\\\\nI0203 10:02:45.979653 6343 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 10:02:45.979655 6343 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 10:02:45.979673 6343 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 10:02:45.979691 6343 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979842 6343 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.854673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.854717 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.854728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.854746 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.854758 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.866680 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.882346 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.897088 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.910101 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.922429 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.935657 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.948563 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be
80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.957545 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.957576 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.957586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.957601 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.957611 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:47Z","lastTransitionTime":"2026-02-03T10:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.960116 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:47 crc kubenswrapper[5010]: I0203 10:02:47.969503 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.060345 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.060407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.060422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.060446 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.060462 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.163682 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.163740 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.163761 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.163785 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.163806 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.267812 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.267888 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.267916 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.267947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.267971 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.370942 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.371271 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.371384 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.371423 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.371436 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.474362 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.474421 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.474433 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.474453 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.474466 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.493908 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:03:11.616968758 +0000 UTC Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.501377 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:48 crc kubenswrapper[5010]: E0203 10:02:48.503092 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.577048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.577289 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.577354 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.577422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.577480 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.667677 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl"] Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.668619 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.670814 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.670852 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.680455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.680651 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.680734 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.680814 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.680900 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.689712 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:46Z\\\",\\\"message\\\":\\\"534 6343 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 10:02:45.979597 6343 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 10:02:45.979604 6343 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 10:02:45.979608 6343 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979632 6343 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 10:02:45.979634 6343 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 10:02:45.979638 6343 factory.go:656] Stopping watch factory\\\\nI0203 10:02:45.979653 6343 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 10:02:45.979655 6343 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 10:02:45.979673 6343 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 10:02:45.979691 6343 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979842 6343 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.702451 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.713028 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.714337 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.714376 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.714451 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.714492 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fhp4\" (UniqueName: \"kubernetes.io/projected/bde7a589-c2e8-48b2-aa06-2fb99731df31-kube-api-access-8fhp4\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.723267 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.733799 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.748015 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.760123 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.764576 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/1.log" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.765235 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/0.log" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.767419 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c" exitCode=1 Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.767464 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.767507 5010 scope.go:117] "RemoveContainer" containerID="6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.768382 5010 scope.go:117] "RemoveContainer" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c" Feb 03 10:02:48 crc kubenswrapper[5010]: E0203 10:02:48.769175 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.773589 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.783238 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.783282 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.783305 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.783325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.783336 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.787229 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.798560 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.815132 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.815301 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.815361 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.815416 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.815452 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8fhp4\" (UniqueName: \"kubernetes.io/projected/bde7a589-c2e8-48b2-aa06-2fb99731df31-kube-api-access-8fhp4\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.817808 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.818601 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.824954 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bde7a589-c2e8-48b2-aa06-2fb99731df31-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.829699 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.838764 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fhp4\" (UniqueName: \"kubernetes.io/projected/bde7a589-c2e8-48b2-aa06-2fb99731df31-kube-api-access-8fhp4\") pod \"ovnkube-control-plane-749d76644c-4vzdl\" (UID: \"bde7a589-c2e8-48b2-aa06-2fb99731df31\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.843481 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.853352 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.867160 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.883932 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.885837 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.885873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.885930 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.885953 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.885964 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.897862 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.912772 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.927166 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.939171 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.953986 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.966316 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.981967 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.985472 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.987911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.987969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.987978 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.987994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.988020 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:48Z","lastTransitionTime":"2026-02-03T10:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:48 crc kubenswrapper[5010]: I0203 10:02:48.997543 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:48Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: W0203 10:02:49.001602 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbde7a589_c2e8_48b2_aa06_2fb99731df31.slice/crio-ccf5d4d7077896db33e5d4cd50a872d9d21364abf54be63cf0c164bb1dc909ac WatchSource:0}: Error finding container ccf5d4d7077896db33e5d4cd50a872d9d21364abf54be63cf0c164bb1dc909ac: Status 404 returned error can't find the container with id ccf5d4d7077896db33e5d4cd50a872d9d21364abf54be63cf0c164bb1dc909ac Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.016497 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531
651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:46Z\\\",\\\"message\\\":\\\"534 6343 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 10:02:45.979597 6343 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 10:02:45.979604 6343 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 10:02:45.979608 6343 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979632 6343 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 10:02:45.979634 6343 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 10:02:45.979638 6343 factory.go:656] Stopping watch factory\\\\nI0203 10:02:45.979653 6343 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 10:02:45.979655 6343 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 10:02:45.979673 6343 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 10:02:45.979691 6343 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979842 6343 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.031422 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.044629 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.059519 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.078125 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.090265 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.090490 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.090506 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.090525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.090539 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.092483 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.192734 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.192772 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.192787 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.192806 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.192818 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.295357 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.295397 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.295407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.295423 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.295434 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.397555 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.397584 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.397593 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.397606 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.397615 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.494075 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:04:47.817406044 +0000 UTC Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.499664 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.499700 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.499712 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.499728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.499739 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.501041 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.501144 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.501231 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.501390 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.602618 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.602651 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.602660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.602673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.602683 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.705428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.705481 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.705493 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.705510 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.705522 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.764632 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-clvdz"] Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.765445 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.765747 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.773156 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" event={"ID":"bde7a589-c2e8-48b2-aa06-2fb99731df31","Type":"ContainerStarted","Data":"4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.773252 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" event={"ID":"bde7a589-c2e8-48b2-aa06-2fb99731df31","Type":"ContainerStarted","Data":"dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.773275 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" event={"ID":"bde7a589-c2e8-48b2-aa06-2fb99731df31","Type":"ContainerStarted","Data":"ccf5d4d7077896db33e5d4cd50a872d9d21364abf54be63cf0c164bb1dc909ac"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.775460 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/1.log" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.778994 5010 scope.go:117] "RemoveContainer" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c" Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.779166 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.789306 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.802649 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.807921 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.807963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.807979 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.808002 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.808022 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.819539 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.827633 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rrj5\" (UniqueName: \"kubernetes.io/projected/081d0234-b506-49ff-81c9-c535f6e1c588-kube-api-access-6rrj5\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.827802 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.832111 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.844471 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.861864 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.872945 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.888361 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.903055 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.910915 5010 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.910965 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.910977 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.910996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.911008 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:49Z","lastTransitionTime":"2026-02-03T10:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.922996 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531
651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d243aa4c763078b20143449f86b52307575d6c2cf775118fb6e82132a3e8658\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:46Z\\\",\\\"message\\\":\\\"534 6343 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 10:02:45.979597 6343 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 10:02:45.979604 6343 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 10:02:45.979608 6343 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979632 6343 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 10:02:45.979634 6343 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 10:02:45.979638 6343 factory.go:656] Stopping watch factory\\\\nI0203 10:02:45.979653 6343 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 10:02:45.979655 6343 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 10:02:45.979673 6343 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 10:02:45.979691 6343 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 10:02:45.979842 6343 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.929909 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.930017 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rrj5\" (UniqueName: \"kubernetes.io/projected/081d0234-b506-49ff-81c9-c535f6e1c588-kube-api-access-6rrj5\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.930521 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:49 crc kubenswrapper[5010]: E0203 10:02:49.930630 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:50.430607047 +0000 UTC m=+40.586583176 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.936064 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.947906 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.954307 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rrj5\" (UniqueName: \"kubernetes.io/projected/081d0234-b506-49ff-81c9-c535f6e1c588-kube-api-access-6rrj5\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.963034 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.975328 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.987260 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:49 crc kubenswrapper[5010]: I0203 10:02:49.996285 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:49Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.007187 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.013667 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.013706 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.013715 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.013730 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.013740 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.017068 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.029143 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.042046 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.056125 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.073010 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.084541 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.094126 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.104716 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.115972 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.116009 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.116018 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.116034 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.116052 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.118314 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.131273 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.142267 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.159304 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 
obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.169940 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.182208 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.193608 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.218582 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.218627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.218638 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.218653 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.218661 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.321280 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.321392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.321407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.321423 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.321437 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.423612 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.423654 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.423666 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.423686 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.423698 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.434126 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:50 crc kubenswrapper[5010]: E0203 10:02:50.434299 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:50 crc kubenswrapper[5010]: E0203 10:02:50.434373 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:51.434352458 +0000 UTC m=+41.590328597 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.494779 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:39:18.786681866 +0000 UTC Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.501309 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:50 crc kubenswrapper[5010]: E0203 10:02:50.501483 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.520525 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"star
tTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.526093 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.526143 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.526160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.526184 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.526201 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.539553 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.554528 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.563916 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.575736 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.587850 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.605760 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 
obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.616387 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628098 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628154 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628165 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.628431 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.638439 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.648120 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.658544 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.667527 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.678496 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.687641 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.700653 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.730461 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.730496 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc 
kubenswrapper[5010]: I0203 10:02:50.730508 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.730531 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.730556 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.832749 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.832838 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.832857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.832877 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.832892 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.935358 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.935402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.935414 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.935430 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:50 crc kubenswrapper[5010]: I0203 10:02:50.935443 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:50Z","lastTransitionTime":"2026-02-03T10:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.037587 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.037620 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.037631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.037643 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.037655 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.140082 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.140107 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.140115 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.140129 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.140140 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.242407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.242439 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.242448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.242463 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.242472 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.344338 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.344376 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.344385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.344399 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.344409 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.444281 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:51 crc kubenswrapper[5010]: E0203 10:02:51.444507 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:51 crc kubenswrapper[5010]: E0203 10:02:51.444628 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:53.444599159 +0000 UTC m=+43.600575328 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.446745 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.446788 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.446807 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.446823 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.446834 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.495744 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:53:50.514232136 +0000 UTC Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.502138 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.502265 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.502280 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:51 crc kubenswrapper[5010]: E0203 10:02:51.502370 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:51 crc kubenswrapper[5010]: E0203 10:02:51.502479 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:51 crc kubenswrapper[5010]: E0203 10:02:51.502567 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.550114 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.550349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.550451 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.550520 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.550582 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.652970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.653054 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.653083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.653110 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.653133 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.756669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.756801 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.756831 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.756862 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.756883 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.859627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.859677 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.859688 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.859705 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:51 crc kubenswrapper[5010]: I0203 10:02:51.859720 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:51.962852 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:51.962887 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:51.962899 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:51.962917 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:51.962930 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:51Z","lastTransitionTime":"2026-02-03T10:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.066189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.066238 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.066271 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.066286 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.066299 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.168929 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.169006 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.169021 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.169045 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.169059 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.271449 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.271499 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.271511 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.271529 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.271540 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.373778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.373831 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.373848 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.373865 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.373895 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.476254 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.476294 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.476304 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.476319 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.476329 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.496377 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:34:57.913122011 +0000 UTC Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.501673 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:52 crc kubenswrapper[5010]: E0203 10:02:52.501831 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.579387 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.579466 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.579488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.579517 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.579541 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.682325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.682371 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.682388 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.682407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.682421 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.784854 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.784927 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.784949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.784977 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.785000 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.887450 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.887803 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.887943 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.888074 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.888240 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.990892 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.990927 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.990935 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.990948 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:52 crc kubenswrapper[5010]: I0203 10:02:52.990957 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:52Z","lastTransitionTime":"2026-02-03T10:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.093847 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.093879 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.093887 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.093900 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.093908 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.196885 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.197343 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.197575 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.197729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.197855 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.300885 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.301176 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.301307 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.301455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.301548 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.403915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.403969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.403982 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.403998 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.404011 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.470549 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:53 crc kubenswrapper[5010]: E0203 10:02:53.470681 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:53 crc kubenswrapper[5010]: E0203 10:02:53.470751 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:02:57.470730514 +0000 UTC m=+47.626706643 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.496506 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:17:10.872075212 +0000 UTC Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.501439 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:53 crc kubenswrapper[5010]: E0203 10:02:53.501589 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.501465 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:53 crc kubenswrapper[5010]: E0203 10:02:53.501684 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.501446 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:53 crc kubenswrapper[5010]: E0203 10:02:53.501777 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.506623 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.506666 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.506678 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.506694 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.506706 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.609553 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.609602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.609613 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.609631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.609643 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
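[editor's note: when condensing a dump like this, it helps to tally which messages actually repeat before eliding anything. The sketch below counts quoted kubenswrapper messages from stdin; the regex is an assumption about the klog line shape shown above and ignores entries without a quoted message.]

    #!/usr/bin/env python3
    """Sketch: tally repeated kubenswrapper events in a journal dump like
    this one. The regex mirrors the klog header format seen above."""
    import re
    import sys
    from collections import Counter

    # journald tag, klog severity+date, time, pid, source file, quoted message.
    LINE = re.compile(
        r'kubenswrapper\[\d+\]: [IEW]\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ \S+\] "([^"]+)"'
    )

    def tally(lines) -> Counter:
        counts: Counter = Counter()
        for line in lines:
            match = LINE.search(line)
            if match:
                counts[match.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for message, count in tally(sys.stdin).most_common(10):
            print(f"{count:6d}  {message}")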
Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.712339 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.712377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.712386 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.712399 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.712409 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.814946 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.814996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.815009 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.815027 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.815038 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.918464 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.918683 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.918729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.918835 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:53 crc kubenswrapper[5010]: I0203 10:02:53.918869 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:53Z","lastTransitionTime":"2026-02-03T10:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.022481 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.022553 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.022578 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.022608 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.022634 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.125564 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.125633 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.125652 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.125677 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.125695 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.228398 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.228472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.228493 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.228521 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.228538 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.331997 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.332048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.332060 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.332077 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.332089 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.434496 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.434573 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.434592 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.434618 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.434635 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.497035 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:14:58.570133307 +0000 UTC Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.501585 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:54 crc kubenswrapper[5010]: E0203 10:02:54.501763 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.536828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.536868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.536876 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.536890 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.536899 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.639604 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.639708 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.639725 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.639755 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.639772 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.741931 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.741976 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.741993 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.742008 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.742020 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.844391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.844422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.844432 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.844444 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.844453 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.947463 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.947621 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.947647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.947676 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:54 crc kubenswrapper[5010]: I0203 10:02:54.947697 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:54Z","lastTransitionTime":"2026-02-03T10:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.042146 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.043653 5010 scope.go:117] "RemoveContainer" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c" Feb 03 10:02:55 crc kubenswrapper[5010]: E0203 10:02:55.044016 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.051045 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.051102 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.051119 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.051142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.051160 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.154173 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.154277 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.154304 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.154333 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.154355 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.257125 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.257184 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.257199 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.257237 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.257254 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.359748 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.360004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.360016 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.360032 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.360043 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.462919 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.462954 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.462966 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.462981 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.462996 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.497397 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:11:13.933212854 +0000 UTC Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.501581 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.501634 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:55 crc kubenswrapper[5010]: E0203 10:02:55.501686 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.501581 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:55 crc kubenswrapper[5010]: E0203 10:02:55.501762 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:55 crc kubenswrapper[5010]: E0203 10:02:55.502081 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.565854 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.565905 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.565922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.565938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.565948 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.667762 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.668153 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.668187 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.668247 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.668272 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.771027 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.771062 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.771070 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.771085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.771094 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.873964 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.874006 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.874016 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.874033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.874046 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.976641 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.976692 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.976706 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.976726 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:55 crc kubenswrapper[5010]: I0203 10:02:55.976739 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:55Z","lastTransitionTime":"2026-02-03T10:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.079455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.079494 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.079503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.079517 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.079526 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.181698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.181771 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.181790 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.181815 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.181833 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.284139 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.284183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.284194 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.284229 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.284242 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.352441 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.352498 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.352513 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.352530 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.352542 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.369329 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:56Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.373427 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.373480 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.373490 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.373503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.373513 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.387238 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:56Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.390862 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.390888 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.390897 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.390911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.390921 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.404263 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:56Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.408376 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.408431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.408444 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.408462 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.408475 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.421556 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:56Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.424863 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.424911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.424922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.424939 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.424950 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.438809 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:56Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.438967 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.440493 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.440526 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.440537 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.440552 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.440565 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.497948 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:30:53.718979174 +0000 UTC Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.501431 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:56 crc kubenswrapper[5010]: E0203 10:02:56.501581 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.543105 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.543149 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.543163 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.543182 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.543196 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.644966 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.645016 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.645029 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.645045 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.645058 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.747058 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.747109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.747123 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.747141 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.747153 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.849719 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.849761 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.849773 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.849789 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.849803 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.952716 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.952773 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.952784 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.952802 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:56 crc kubenswrapper[5010]: I0203 10:02:56.952812 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:56Z","lastTransitionTime":"2026-02-03T10:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.055146 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.055190 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.055201 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.055244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.055257 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.157448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.157491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.157503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.157518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.157530 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.259947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.260069 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.260085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.260104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.260117 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.362494 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.362541 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.362551 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.362567 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.362577 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.465631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.465683 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.465696 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.465714 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.465725 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.498116 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 18:36:19.853027394 +0000 UTC Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.501593 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.501593 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:57 crc kubenswrapper[5010]: E0203 10:02:57.501953 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:57 crc kubenswrapper[5010]: E0203 10:02:57.501755 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.501588 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:57 crc kubenswrapper[5010]: E0203 10:02:57.502068 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.517444 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:57 crc kubenswrapper[5010]: E0203 10:02:57.517624 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:57 crc kubenswrapper[5010]: E0203 10:02:57.517700 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:05.517679629 +0000 UTC m=+55.673655768 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.567563 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.567607 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.567615 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.567628 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.567637 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.669793 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.669828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.669841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.669856 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.669869 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.773084 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.773147 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.773160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.773176 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.773187 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.875171 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.875235 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.875250 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.875269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.875281 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.978145 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.978287 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.978311 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.978340 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:57 crc kubenswrapper[5010]: I0203 10:02:57.978360 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:57Z","lastTransitionTime":"2026-02-03T10:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.081392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.081457 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.081497 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.081526 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.081548 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.185021 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.185069 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.185085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.185104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.185119 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.288266 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.288744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.288977 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.289188 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.289438 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.392838 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.392868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.392877 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.392891 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.392901 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.495864 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.495952 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.496021 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.496088 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.496112 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.499172 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:38:11.686770027 +0000 UTC Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.501519 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:02:58 crc kubenswrapper[5010]: E0203 10:02:58.501655 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.598379 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.598448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.598464 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.598480 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.598492 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.701844 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.701898 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.701916 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.701945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.701964 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.805948 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.805988 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.805997 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.806011 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.806020 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.907696 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.907756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.907770 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.907818 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:58 crc kubenswrapper[5010]: I0203 10:02:58.907836 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:58Z","lastTransitionTime":"2026-02-03T10:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.010284 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.010346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.010361 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.010385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.010400 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.113428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.113472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.113482 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.113496 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.113506 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.215994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.216064 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.216088 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.216568 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.216625 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.319547 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.319590 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.319600 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.319614 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.319626 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.321979 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.331399 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.339250 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.352073 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.362876 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 
10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.374899 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.388728 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.403507 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.421851 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.422105 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc 
kubenswrapper[5010]: I0203 10:02:59.422203 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.422318 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.422435 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.422827 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531
651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.432167 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.446137 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.457748 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.470380 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.482598 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.492252 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.499596 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:20:39.86337079 +0000 UTC Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.502138 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.502155 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.502188 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:02:59 crc kubenswrapper[5010]: E0203 10:02:59.502769 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.502426 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: E0203 10:02:59.502779 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:02:59 crc kubenswrapper[5010]: E0203 10:02:59.502361 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.516864 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.525078 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.525198 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.525286 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.525360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.525437 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.528782 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:59Z is after 2025-08-24T17:21:41Z" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.627975 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.628061 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.628084 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.628113 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.628135 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.731954 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.732035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.732058 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.732080 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.732094 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.835306 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.835381 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.835392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.835413 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.835429 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.952910 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.952980 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.953003 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.953031 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:02:59 crc kubenswrapper[5010]: I0203 10:02:59.953058 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:02:59Z","lastTransitionTime":"2026-02-03T10:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.055592 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.055664 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.055689 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.055718 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.055779 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.158056 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.158099 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.158110 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.158126 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.158137 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.261253 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.261344 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.261383 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.261415 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.261434 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.364291 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.364343 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.364360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.364382 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.364397 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.467485 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.467527 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.467535 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.467555 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.467567 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.499843 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:51:20.099977059 +0000 UTC Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.501268 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:00 crc kubenswrapper[5010]: E0203 10:03:00.501441 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.514739 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.528884 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.545257 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.558827 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.570473 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.570525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.570536 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.570553 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.570568 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.573198 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.583517 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.597675 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.613984 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.626201 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.637730 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.650424 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.661772 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.673455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.673504 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.673515 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.673531 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.673541 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.680163 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.691742 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.702911 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.717987 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.733247 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:00Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.775871 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.775915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.775925 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.775942 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.775951 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.878687 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.878743 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.878756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.878775 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.878788 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.982109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.982158 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.982173 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.982188 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:00 crc kubenswrapper[5010]: I0203 10:03:00.982201 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:00Z","lastTransitionTime":"2026-02-03T10:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.083946 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.083983 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.083995 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.084010 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.084020 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.186279 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.186317 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.186327 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.186342 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.186372 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.266466 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.266597 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.266626 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.266689 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:03:33.26667354 +0000 UTC m=+83.422649669 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.266753 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.266834 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:33.266811363 +0000 UTC m=+83.422787542 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.267092 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.267150 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:33.267138942 +0000 UTC m=+83.423115121 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.288970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.289005 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.289013 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.289026 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.289036 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.367141 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.367490 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.367426 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.367752 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.367838 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.367600 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.368015 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.368061 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.368191 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:33.368038624 +0000 UTC m=+83.524014753 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.368333 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:33.368319881 +0000 UTC m=+83.524296200 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.392086 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.392151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.392163 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.392181 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.392192 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.494487 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.494522 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.494536 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.494550 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.494561 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.500035 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:14:42.979440186 +0000 UTC Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.501358 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.501404 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.501422 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.501490 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.501584 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:01 crc kubenswrapper[5010]: E0203 10:03:01.501654 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.597166 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.597233 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.597244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.597259 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.597269 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.699599 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.699631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.699639 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.699651 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.699661 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.801451 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.801488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.801500 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.801513 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.801523 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.904264 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.904337 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.904352 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.904377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:01 crc kubenswrapper[5010]: I0203 10:03:01.904396 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:01Z","lastTransitionTime":"2026-02-03T10:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.012617 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.012676 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.012692 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.012716 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.012754 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.116371 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.116430 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.116448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.116471 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.116490 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.219357 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.219393 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.219405 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.219421 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.219432 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.322393 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.322468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.322518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.322539 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.322548 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.425423 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.425456 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.425466 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.425482 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.425492 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.500875 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:35:42.910893277 +0000 UTC Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.501427 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:02 crc kubenswrapper[5010]: E0203 10:03:02.501559 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.527302 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.527337 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.527345 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.527357 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.527366 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.629696 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.629943 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.630048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.630135 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.630205 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.733199 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.733281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.733301 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.733326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.733349 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.836286 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.836318 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.836326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.836349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.836366 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.938722 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.939082 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.939202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.939345 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:02 crc kubenswrapper[5010]: I0203 10:03:02.939496 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:02Z","lastTransitionTime":"2026-02-03T10:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.044095 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.044136 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.044145 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.044158 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.044167 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.146475 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.146504 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.146513 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.146525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.146535 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.248495 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.248529 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.248541 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.248556 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.248567 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.351819 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.351870 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.351887 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.351908 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.351924 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.454545 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.454590 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.454600 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.454616 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.454627 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.501394 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:45:52.144796794 +0000 UTC Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.501633 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.501720 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:03 crc kubenswrapper[5010]: E0203 10:03:03.501804 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.501847 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:03 crc kubenswrapper[5010]: E0203 10:03:03.502000 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:03 crc kubenswrapper[5010]: E0203 10:03:03.502089 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.556916 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.556951 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.556959 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.556974 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.556983 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.659388 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.659429 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.659440 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.659454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.659463 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.762037 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.762072 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.762080 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.762093 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.762103 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.864631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.864673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.864685 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.864702 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.864714 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.968028 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.968180 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.968209 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.968273 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:03 crc kubenswrapper[5010]: I0203 10:03:03.968300 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:03Z","lastTransitionTime":"2026-02-03T10:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.070734 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.070776 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.070790 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.070814 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.070827 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.172958 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.173004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.173014 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.173030 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.173040 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.275787 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.275840 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.275850 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.275868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.276186 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.378731 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.378770 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.378778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.378792 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.378802 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.481351 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.481384 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.481393 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.481406 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.481414 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.501146 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:04 crc kubenswrapper[5010]: E0203 10:03:04.501281 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.501729 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:31:04.641673913 +0000 UTC Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.584605 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.584696 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.584710 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.584731 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.584744 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.688230 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.688281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.688292 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.688309 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.688323 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.791296 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.791367 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.791385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.791405 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.791421 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.893607 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.893669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.893686 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.893707 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.893719 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.996537 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.996600 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.996621 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.996649 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:04 crc kubenswrapper[5010]: I0203 10:03:04.996670 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:04Z","lastTransitionTime":"2026-02-03T10:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.099403 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.099481 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.099503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.099533 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.099554 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.201526 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.201589 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.201613 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.201641 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.201662 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.304033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.304102 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.304117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.304134 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.304149 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.407308 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.407373 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.407385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.407405 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.407417 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.501622 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:05 crc kubenswrapper[5010]: E0203 10:03:05.501864 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.501670 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.501648 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:05 crc kubenswrapper[5010]: E0203 10:03:05.501976 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.502046 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:09:58.216972682 +0000 UTC Feb 03 10:03:05 crc kubenswrapper[5010]: E0203 10:03:05.502268 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.509692 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.509736 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.509750 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.509768 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.509781 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:05Z","lastTransitionTime":"2026-02-03T10:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.612489 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.612519 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:05 crc kubenswrapper[5010]: E0203 10:03:05.612670 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.612688 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:05 crc kubenswrapper[5010]: I0203 10:03:05.612707 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:05 crc kubenswrapper[5010]: E0203 10:03:05.612722 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:21.612705969 +0000 UTC m=+71.768682108 (durationBeforeRetry 16s). 
[... event/status cycle repeats, interleaved with the volume entries above, at 10:03:05.612, then at 10:03:05.715, 10:03:05.818, 10:03:05.920, 10:03:06.022, 10:03:06.124, 10:03:06.230 and 10:03:06.333 ...]
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.435974 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.436053 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.436077 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.436105 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.436126 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.454653 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.454702 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.454723 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.454744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.454758 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.468814 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:06Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.474007 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.474059 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.474069 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.474084 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.474100 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.490475 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:06Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.495670 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.495739 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.495755 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.495776 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.495790 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.501418 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.501565 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.503168 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:14:18.905684808 +0000 UTC Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.512028 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154a
fa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:06Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.516123 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.516163 5010 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.516177 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.516197 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.516231 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.530751 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:06Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.534190 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.534243 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.534260 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.534280 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.534292 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.545439 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:06Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:06 crc kubenswrapper[5010]: E0203 10:03:06.545616 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.551888 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.551938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.551948 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.551967 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.551980 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.654690 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.654744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.654757 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.654771 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.654781 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.757845 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.757915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.757938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.757967 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.757990 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.860562 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.860605 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.860615 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.860627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.860637 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.963187 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.963262 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.963274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.963290 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:06 crc kubenswrapper[5010]: I0203 10:03:06.963300 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:06Z","lastTransitionTime":"2026-02-03T10:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.065590 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.065643 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.065663 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.065690 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.065707 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.168627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.168672 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.168681 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.168695 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.168709 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.271146 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.271183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.271195 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.271211 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.271267 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.373663 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.373711 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.373722 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.373739 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.373753 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.475920 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.475979 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.475995 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.476015 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.476029 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.501128 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.501169 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:07 crc kubenswrapper[5010]: E0203 10:03:07.501310 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.501386 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:07 crc kubenswrapper[5010]: E0203 10:03:07.501653 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:07 crc kubenswrapper[5010]: E0203 10:03:07.501907 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.501934 5010 scope.go:117] "RemoveContainer" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.504021 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:48:01.012441195 +0000 UTC Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.578272 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.578602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.578612 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.578628 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.578637 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.681121 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.681153 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.681161 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.681174 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.681182 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.783708 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.783752 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.783763 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.783779 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.783790 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.833525 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/1.log" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.835732 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.836637 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.852541 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.864667 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.886355 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.886391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.886400 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.886445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.886456 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.902634 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73
093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod 
openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.922083 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.937864 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.950256 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.963255 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.974066 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.989588 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.989642 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.989656 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.989676 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.989691 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:07Z","lastTransitionTime":"2026-02-03T10:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:07 crc kubenswrapper[5010]: I0203 10:03:07.992523 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:07Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.005998 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.016261 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 
10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.028244 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.043730 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.060734 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.076788 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092135 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092825 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092866 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092877 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092894 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.092904 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.108669 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc20681
6cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.195104 5010 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.195142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.195153 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.195166 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.195175 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.298234 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.298334 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.298355 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.298383 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.298400 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.400927 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.400969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.400981 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.400996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.401005 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.501687 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:08 crc kubenswrapper[5010]: E0203 10:03:08.501818 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.503033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.503083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.503105 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.503135 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.503157 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.504299 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:18:44.162028586 +0000 UTC Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.606358 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.606428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.606445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.606460 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.606470 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.709889 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.709966 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.710002 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.710037 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.710062 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.813043 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.813092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.813105 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.813123 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.813137 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.842541 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/2.log" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.843152 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/1.log" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.846520 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3" exitCode=1 Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.846569 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.846649 5010 scope.go:117] "RemoveContainer" containerID="795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.848034 5010 scope.go:117] "RemoveContainer" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3" Feb 03 10:03:08 crc kubenswrapper[5010]: E0203 10:03:08.848379 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.870928 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.890042 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.907647 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.915309 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.915348 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.915360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.915380 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.915394 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:08Z","lastTransitionTime":"2026-02-03T10:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.921976 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.937523 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.950534 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.966429 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:08 crc kubenswrapper[5010]: I0203 10:03:08.981291 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.001093 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://795aee367bf11026254af0f0a98972df16f6a531651d9435973cd00b247c0b9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:02:47Z\\\",\\\"message\\\":\\\"te:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:02:47.545802 6468 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-cvpds\\\\nF0203 10:02:47.545810 6468 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:02:47Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:02:47.545805 6468 obj_retry.go:365] Adding new object: *v1.Pod openshi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] 
Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\
\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.012016 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.017614 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.017647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.017657 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.017671 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.017680 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.027620 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.040857 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.058982 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.071832 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.084589 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.098012 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.134468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.134518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.134529 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.134545 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.134557 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.137336 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.238269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.238332 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.238353 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.238401 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.238426 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.341552 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.341685 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.341695 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.341713 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.341725 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.444489 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.444524 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.444535 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.444551 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.444563 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.501971 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.501992 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:09 crc kubenswrapper[5010]: E0203 10:03:09.502122 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:09 crc kubenswrapper[5010]: E0203 10:03:09.502256 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.501995 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:09 crc kubenswrapper[5010]: E0203 10:03:09.502362 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.505124 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:44:05.030942135 +0000 UTC Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.546967 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.547003 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.547035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.547052 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.547063 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.649346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.649406 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.649420 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.649437 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.649450 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.752360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.752402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.752414 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.752428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.752439 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.851707 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/2.log" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.854014 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.854044 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.854056 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.854070 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.854079 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.856365 5010 scope.go:117] "RemoveContainer" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3" Feb 03 10:03:09 crc kubenswrapper[5010]: E0203 10:03:09.856562 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.872339 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.885704 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.905161 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.918663 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.937013 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.950925 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.956191 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.956244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.956254 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.956269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.956279 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:09Z","lastTransitionTime":"2026-02-03T10:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.967428 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.984878 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:09 crc kubenswrapper[5010]: I0203 10:03:09.996756 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:09Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.008917 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.018436 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc 
kubenswrapper[5010]: I0203 10:03:10.030298 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.040945 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.052484 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.058900 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.058940 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.058950 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.058969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.058983 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.063549 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.076022 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.086617 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.162314 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.162401 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.162439 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.162465 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.162484 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.264809 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.264837 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.264846 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.264859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.264867 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.367316 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.367410 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.367447 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.367478 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.367504 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.469183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.469239 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.469248 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.469262 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.469271 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.501969 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:10 crc kubenswrapper[5010]: E0203 10:03:10.502113 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.505770 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:52:17.561889668 +0000 UTC Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.515285 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.534054 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.545678 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.556811 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.568305 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.572525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.572575 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.572586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.572603 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.572612 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.582355 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.598854 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.612765 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.628735 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.640891 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.655764 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.671091 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.675137 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.675256 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.675274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.675326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.675346 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.685194 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.701422 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.713715 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.733039 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.747642 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:10Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.778282 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.778316 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.778326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.778340 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.778350 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.880598 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.880652 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.880670 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.880693 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.880709 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.983302 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.983351 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.983363 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.983379 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:10 crc kubenswrapper[5010]: I0203 10:03:10.983393 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:10Z","lastTransitionTime":"2026-02-03T10:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.085604 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.085652 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.085660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.085673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.085682 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.188669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.188863 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.188922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.189029 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.189113 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.291321 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.291612 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.291697 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.291778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.291847 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.394714 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.394975 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.395093 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.395202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.395279 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.498320 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.498387 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.498400 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.498422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.498434 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.501828 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.501993 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.501837 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:11 crc kubenswrapper[5010]: E0203 10:03:11.502167 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:11 crc kubenswrapper[5010]: E0203 10:03:11.502282 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:11 crc kubenswrapper[5010]: E0203 10:03:11.502185 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
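The two "Failed to update status for pod" entries earlier in this excerpt (at 10:03:10.74x) both die inside the pod.network-node-identity.openshift.io webhook call: the certificate served at https://127.0.0.1:9743 stopped being valid at 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03. The same NotBefore/NotAfter window check is easy to reproduce with Go's crypto/x509. A minimal sketch, assuming the webhook's serving certificate has been exported to a PEM file (the path below is hypothetical):

```go
// Minimal sketch: decode a PEM certificate and compare its validity window
// against the clock, mirroring the check behind the kubelet's
// "x509: certificate has expired or is not yet valid" error above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; export the webhook's serving certificate here first.
	pemBytes, err := os.ReadFile("/tmp/network-node-identity-webhook.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\nNow:       %s\n", cert.NotBefore, cert.NotAfter, now)
	switch {
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	case now.After(cert.NotAfter):
		fmt.Println("certificate has expired") // the condition reported at 10:03:10
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

Run against the certificate in question, this prints the same verdict the kubelet reports, since 2026-02-03T10:03:10Z falls after NotAfter.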
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.506292 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:21:54.860528932 +0000 UTC Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.600523 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.600794 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.600960 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.601069 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.601235 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.703919 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.703969 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.703982 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.704002 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.704014 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.806517 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.806561 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.806571 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.806586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.806595 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.908680 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.909258 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.909273 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.909292 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:11 crc kubenswrapper[5010]: I0203 10:03:11.909304 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:11Z","lastTransitionTime":"2026-02-03T10:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.011738 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.011797 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.011809 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.011831 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.011849 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.114829 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.114875 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.114892 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.114915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.114930 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.217778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.217823 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.217831 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.217844 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.217854 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.319949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.320016 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.320035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.320059 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.320072 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.423400 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.423445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.423456 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.423472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.423485 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.501706 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:12 crc kubenswrapper[5010]: E0203 10:03:12.501886 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.506485 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:19:28.94161487 +0000 UTC Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.526235 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.526470 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.526556 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.526656 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:12 crc kubenswrapper[5010]: I0203 10:03:12.526745 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:12Z","lastTransitionTime":"2026-02-03T10:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
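Every "Error syncing pod, skipping" and "Node became not ready" entry in this stretch hinges on the same gate: the runtime reports NetworkReady=false because no CNI network configuration exists under /etc/kubernetes/cni/net.d/. The sketch below is an illustrative assumption about the shape of such a check (not kubelet or CRI-O source): scan the directory for the file types CNI consumers commonly accept.

```go
// Illustrative sketch (an assumption, not kubelet or CRI-O source): report
// whether any CNI network configuration exists, in the spirit of the readiness
// check behind "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // the directory named in the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions commonly accepted for CNI configs
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file found; the plugin would report NetworkReady=false")
		return
	}
	fmt.Println("CNI configs:", found)
}
```

On this node the scan would come back empty: the ovnkube-controller container is in CrashLoopBackOff (see the status dump above), so presumably nothing has written a network config yet, and the NodeNotReady cycle keeps repeating.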
[... repeated status cycles elided: the same five-entry cycle repeats at roughly 100 ms intervals from 10:03:12.526 through 10:03:13.451 ...]
Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.501816 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:13 crc kubenswrapper[5010]: E0203 10:03:13.501945 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.502103 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:13 crc kubenswrapper[5010]: E0203 10:03:13.502145 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.502276 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:13 crc kubenswrapper[5010]: E0203 10:03:13.502330 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.507142 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:48:29.332259903 +0000 UTC
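The certificate_manager.go:356 lines recur about once per second with the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline each time (2025-11-17, 2025-12-09, 2025-12-14 and 2025-12-18 across this excerpt). That pattern is consistent with a deadline drawn afresh from a jittered point late in the certificate's validity window on every pass; because every drawn date already lies behind the node clock (2026-02-03), rotation is perpetually due and the line keeps reappearing. A sketch of the idea, where the 0.7 base, the 0.2 jitter span and the NotBefore date are illustrative assumptions rather than values from this log:

```go
// Illustrative sketch: derive a jittered rotation deadline from a certificate's
// validity window. This approximates the behavior suggested by the log (a random
// late-window deadline recomputed on each pass); it is not client-go source.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point 70-90% of the way through the validity
// window, so successive calls yield different dates, matching how the logged
// deadlines jump around from one second to the next.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(fraction * float64(total)))
}

func main() {
	// NotAfter comes from the log; NotBefore is an assumed one-year-earlier issue time.
	notBefore := time.Date(2025, time.February, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)
	for i := 0; i < 3; i++ {
		fmt.Println("candidate rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```

With a one-year window ending at the logged expiration, a 70-90% point lands between roughly early November 2025 and mid-January 2026, the same neighborhood as the deadlines printed above.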
Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.657224 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.657259 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.657270 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.657284 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.657295 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.759673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.759717 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.759732 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.759752 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.759767 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.861698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.861742 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.861754 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.861769 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.861781 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.964117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.964150 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.964165 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.964184 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:13 crc kubenswrapper[5010]: I0203 10:03:13.964196 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:13Z","lastTransitionTime":"2026-02-03T10:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.066879 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.066927 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.066938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.066954 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.066967 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.168860 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.168895 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.168905 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.168918 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.168929 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.271350 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.271384 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.271395 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.271411 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.271424 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.373718 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.373762 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.373772 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.373787 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.373795 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.476546 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.476589 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.476602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.476619 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.476630 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.502387 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:14 crc kubenswrapper[5010]: E0203 10:03:14.502501 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.507713 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:19:50.93150374 +0000 UTC Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.579283 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.579645 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.579657 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.579674 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.579685 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.681471 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.681512 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.681524 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.681541 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:14 crc kubenswrapper[5010]: I0203 10:03:14.681553 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:14Z","lastTransitionTime":"2026-02-03T10:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.502612 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.502697 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.502612 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:15 crc kubenswrapper[5010]: E0203 10:03:15.502787 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:03:15 crc kubenswrapper[5010]: E0203 10:03:15.502910 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:03:15 crc kubenswrapper[5010]: E0203 10:03:15.502994 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:03:15 crc kubenswrapper[5010]: I0203 10:03:15.508087 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:10:28.370011358 +0000 UTC
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.501438 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.501562 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.509163 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:51:29.669211386 +0000 UTC
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.906227 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.906283 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.906295 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.906313 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.906325 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.923998 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:16Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.928054 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.928088 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.928097 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.928111 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.928120 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.939245 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:16Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.942728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.942756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.942764 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.942777 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.942787 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.953946 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:16Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.956991 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.957022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.957033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.957049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.957062 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.967822 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:16Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.970695 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.970751 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.970768 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.970791 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.970805 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.982252 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:16Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:16 crc kubenswrapper[5010]: E0203 10:03:16.982442 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.983524 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.983559 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.983571 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.983588 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:16 crc kubenswrapper[5010]: I0203 10:03:16.983600 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:16Z","lastTransitionTime":"2026-02-03T10:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.085556 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.085590 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.085599 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.085615 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.085624 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.188196 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.188490 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.188678 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.188776 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.188851 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.291231 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.291440 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.291588 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.291945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.292078 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.394903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.394952 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.394964 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.394978 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.394990 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.497257 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.497308 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.497318 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.497334 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.497343 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.501560 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.501570 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:17 crc kubenswrapper[5010]: E0203 10:03:17.501704 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.501577 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:17 crc kubenswrapper[5010]: E0203 10:03:17.501831 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:17 crc kubenswrapper[5010]: E0203 10:03:17.501935 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.510029 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:43:29.374038856 +0000 UTC Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.599322 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.599366 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.599377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.599394 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.599405 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.702030 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.702056 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.702065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.702076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.702085 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.805017 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.805066 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.805074 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.805089 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.805098 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.907293 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.907359 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.907376 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.907399 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:17 crc kubenswrapper[5010]: I0203 10:03:17.907413 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:17Z","lastTransitionTime":"2026-02-03T10:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.009919 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.009947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.009957 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.009970 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.009979 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.112244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.112274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.112283 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.112298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.112308 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.213995 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.214067 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.214079 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.214104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.214118 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.316422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.316463 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.316472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.316486 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.316495 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.419768 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.419830 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.419842 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.419858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.419869 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.501885 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:18 crc kubenswrapper[5010]: E0203 10:03:18.502102 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.510440 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:32:08.730317674 +0000 UTC Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.522065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.522099 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.522109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.522143 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.522153 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.625232 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.625278 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.625291 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.625308 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.625319 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.727503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.727541 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.727551 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.727569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.727581 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.830505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.830557 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.830569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.830586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.830598 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.932448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.932494 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.932506 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.932525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:18 crc kubenswrapper[5010]: I0203 10:03:18.932536 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:18Z","lastTransitionTime":"2026-02-03T10:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.035284 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.035324 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.035332 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.035347 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.035357 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.137397 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.137445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.137471 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.137488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.137497 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.239073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.239114 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.239126 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.239141 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.239150 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.341057 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.341111 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.341120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.341134 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.341149 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.443058 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.443092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.443101 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.443115 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.443124 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.501878 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.501884 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:19 crc kubenswrapper[5010]: E0203 10:03:19.502027 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:19 crc kubenswrapper[5010]: E0203 10:03:19.502092 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.501893 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:19 crc kubenswrapper[5010]: E0203 10:03:19.502182 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.511265 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:21:53.989857692 +0000 UTC Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.545006 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.545046 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.545057 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.545073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.545082 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.647601 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.647638 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.647649 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.647664 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.647674 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.750186 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.750346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.750362 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.750386 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.750398 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.852391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.852434 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.852445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.852461 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.852472 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.954294 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.954317 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.954325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.954338 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:19 crc kubenswrapper[5010]: I0203 10:03:19.954346 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:19Z","lastTransitionTime":"2026-02-03T10:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.057356 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.057399 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.057410 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.057436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.057452 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.159928 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.159966 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.159982 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.160004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.160042 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.263341 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.263384 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.263393 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.263409 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.263419 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.366247 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.366304 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.366321 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.366344 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.366361 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.468396 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.468431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.468442 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.468457 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.468468 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.501254 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:20 crc kubenswrapper[5010]: E0203 10:03:20.501389 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.511959 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:05:49.827660673 +0000 UTC Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.512447 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.522882 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.534583 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.546385 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.568461 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.571303 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.571344 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.571361 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.571386 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.571524 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.579998 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.594261 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.606475 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.620430 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.632123 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.644682 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.657871 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.668765 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.673759 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.673790 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.673801 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.673818 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.673830 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.679400 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.693230 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.708324 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.719467 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:20Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.776206 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.776272 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.776285 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.776300 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.776312 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.878265 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.878347 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.878359 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.878378 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.878391 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.980893 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.980928 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.980940 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.980955 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:20 crc kubenswrapper[5010]: I0203 10:03:20.980965 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:20Z","lastTransitionTime":"2026-02-03T10:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.083527 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.083579 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.083591 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.083609 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.083622 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.186532 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.186575 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.186586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.186602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.186613 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.289426 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.289489 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.289499 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.289521 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.289538 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.395793 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.395838 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.395850 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.395869 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.395881 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.499510 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.499565 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.499580 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.499602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.499617 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.501366 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:21 crc kubenswrapper[5010]: E0203 10:03:21.501475 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.501947 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:21 crc kubenswrapper[5010]: E0203 10:03:21.502072 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.502070 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:21 crc kubenswrapper[5010]: E0203 10:03:21.502263 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.511731 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.512519 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:37:14.427568288 +0000 UTC Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.602256 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.602289 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.602301 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.602318 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.602330 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.692165 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:21 crc kubenswrapper[5010]: E0203 10:03:21.692332 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:03:21 crc kubenswrapper[5010]: E0203 10:03:21.692381 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:03:53.692368558 +0000 UTC m=+103.848344687 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.704765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.704799 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.704810 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.704826 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.704841 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.806719 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.806754 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.806765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.806782 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.806793 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.908915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.908937 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.908945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.908958 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:21 crc kubenswrapper[5010]: I0203 10:03:21.908970 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:21Z","lastTransitionTime":"2026-02-03T10:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.011247 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.011285 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.011302 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.011317 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.011329 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.113729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.113765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.113773 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.113788 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.113798 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.216530 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.216591 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.216602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.216619 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.216633 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.318596 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.318627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.318635 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.318647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.318656 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.420724 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.420763 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.420775 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.420793 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.420806 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.501921 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:22 crc kubenswrapper[5010]: E0203 10:03:22.502354 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.502748 5010 scope.go:117] "RemoveContainer" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3" Feb 03 10:03:22 crc kubenswrapper[5010]: E0203 10:03:22.502941 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.513173 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:07:05.176276559 +0000 UTC Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.523076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.523117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.523127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.523146 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.523157 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.625809 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.625849 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.625858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.625873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.625881 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.727902 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.727934 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.727944 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.727961 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.727975 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.830774 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.830804 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.830813 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.830825 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.830833 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.933174 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.933239 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.933254 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.933274 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:22 crc kubenswrapper[5010]: I0203 10:03:22.933287 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:22Z","lastTransitionTime":"2026-02-03T10:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.036004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.036040 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.036049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.036063 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.036075 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.138038 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.138076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.138085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.138099 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.138108 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.241158 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.241207 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.241255 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.241273 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.241285 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.344045 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.344083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.344097 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.344112 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.344123 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.446684 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.446719 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.446728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.446740 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.446751 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.501108 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.501177 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:23 crc kubenswrapper[5010]: E0203 10:03:23.501258 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:23 crc kubenswrapper[5010]: E0203 10:03:23.501321 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.501483 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:23 crc kubenswrapper[5010]: E0203 10:03:23.501667 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.514018 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:27:59.531334178 +0000 UTC Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.549874 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.549912 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.549922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.549941 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.549952 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.652426 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.652462 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.652475 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.652492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.652505 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.755268 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.755319 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.755334 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.755353 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.755366 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.857945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.857987 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.857999 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.858014 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.858028 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.960567 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.960613 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.960623 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.960636 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:23 crc kubenswrapper[5010]: I0203 10:03:23.960653 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:23Z","lastTransitionTime":"2026-02-03T10:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.063020 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.063058 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.063067 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.063080 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.063089 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.165605 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.165635 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.165668 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.165684 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.165717 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.268355 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.268994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.269155 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.269189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.269204 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.372304 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.372368 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.372385 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.372409 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.372425 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.474990 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.475107 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.475130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.475160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.475182 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.501589 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:24 crc kubenswrapper[5010]: E0203 10:03:24.501733 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.514162 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:52:52.393131014 +0000 UTC Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.577773 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.577832 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.577855 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.577883 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.577905 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.680762 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.680803 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.680811 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.680826 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.680834 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.783198 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.783250 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.783260 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.783275 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.783286 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.886361 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.886414 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.886431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.886453 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.886470 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.913471 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/0.log" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.913529 5010 generic.go:334] "Generic (PLEG): container finished" podID="8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef" containerID="b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a" exitCode=1 Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.913558 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerDied","Data":"b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.913916 5010 scope.go:117] "RemoveContainer" containerID="b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.940133 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.953069 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.966918 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.989065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.989114 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.989133 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.989153 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.989168 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:24Z","lastTransitionTime":"2026-02-03T10:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:24 crc kubenswrapper[5010]: I0203 10:03:24.990153 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.005066 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.023004 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.035958 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.051508 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.067151 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.082993 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.091490 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.091523 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc 
kubenswrapper[5010]: I0203 10:03:25.091532 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.091549 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.091559 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.096001 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.110525 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.121653 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.136687 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.147792 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.167830 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.180905 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.194908 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.194938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.194946 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.194961 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.194969 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.196178 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.297643 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.297670 5010 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.297679 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.297694 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.297704 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.399875 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.399903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.399912 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.399925 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.399935 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.501317 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:25 crc kubenswrapper[5010]: E0203 10:03:25.501410 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.501561 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:25 crc kubenswrapper[5010]: E0203 10:03:25.501609 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.501703 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:25 crc kubenswrapper[5010]: E0203 10:03:25.501741 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.509380 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.509448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.509471 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.509519 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.509542 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.514882 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:33:53.040087362 +0000 UTC Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.612621 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.612765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.612784 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.612807 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.612824 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.715795 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.715835 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.715846 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.715865 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.715876 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.818167 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.818199 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.818206 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.818239 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.818250 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.918103 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/0.log" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.918159 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerStarted","Data":"d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.919302 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.919342 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.919355 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.919372 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.919383 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:25Z","lastTransitionTime":"2026-02-03T10:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.929120 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.939957 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.951360 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.960435 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.976878 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.986187 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:25 crc kubenswrapper[5010]: I0203 10:03:25.999083 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:25Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.007763 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.020542 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.021401 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.021430 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.021439 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.021455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.021465 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.031202 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.041041 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02
:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.052269 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.063134 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 
10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.073627 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.089048 5010 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2f
c9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.105187 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-0
3T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabo
uts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.117404 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.123448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.123489 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.123497 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.123509 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.123519 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.127745 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:26Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.226410 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.226446 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.226457 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.226474 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.226487 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.328604 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.328672 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.328691 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.328715 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.328733 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.431163 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.431189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.431197 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.431243 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.431262 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.501379 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:26 crc kubenswrapper[5010]: E0203 10:03:26.501528 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.515247 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:32:26.786900285 +0000 UTC Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.533737 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.533804 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.533821 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.533842 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.533857 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.636584 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.636634 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.636653 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.636675 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.636690 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.739877 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.739927 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.739939 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.739957 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.739975 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.842381 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.842445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.842466 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.842494 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.842517 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.944518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.944562 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.944571 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.944585 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:26 crc kubenswrapper[5010]: I0203 10:03:26.944595 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:26Z","lastTransitionTime":"2026-02-03T10:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.047756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.047808 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.047823 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.047841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.047852 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.051595 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.051630 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.051641 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.051654 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.051664 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.067885 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:27Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.072575 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.072615 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.072629 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.072646 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.072658 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.087189 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:27Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.095516 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.095572 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.095586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.095605 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.095626 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.112452 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:27Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.116568 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.116632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.116653 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.116680 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.116700 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.130359 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:27Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.134698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.134749 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.134761 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.134779 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.134792 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.147397 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:27Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.147570 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.149986 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.150023 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.150034 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.150050 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.150063 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.252594 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.252637 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.252646 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.252662 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.252671 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.355081 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.355115 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.355130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.355151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.355168 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.457781 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.457834 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.457850 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.457869 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.457883 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.501621 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.501687 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.501765 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.501907 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.501975 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:27 crc kubenswrapper[5010]: E0203 10:03:27.502037 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.515419 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:47:12.64690788 +0000 UTC Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.559909 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.559947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.559955 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.559971 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.559980 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.661987 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.662061 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.662085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.662114 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.662135 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.765269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.765319 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.765334 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.765353 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.765369 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.867464 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.867517 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.867533 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.867552 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.867566 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.970340 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.970404 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.970422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.970448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:27 crc kubenswrapper[5010]: I0203 10:03:27.970473 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:27Z","lastTransitionTime":"2026-02-03T10:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.073847 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.073889 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.073903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.073918 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.073929 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.176488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.176538 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.176550 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.176566 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.176578 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.278975 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.279058 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.279068 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.279083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.279092 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.381586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.381627 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.381645 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.381662 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.381672 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.483696 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.483801 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.483825 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.483852 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.483872 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.502344 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:28 crc kubenswrapper[5010]: E0203 10:03:28.502936 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.516317 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:39:35.142523693 +0000 UTC Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.586713 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.586755 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.586777 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.586795 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.586808 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.689561 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.689604 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.689613 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.689629 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.689638 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.792915 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.793096 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.793116 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.793138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.793152 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.895743 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.895796 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.895805 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.895821 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.895832 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.998373 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.998445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.998465 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.998492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:28 crc kubenswrapper[5010]: I0203 10:03:28.998511 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:28Z","lastTransitionTime":"2026-02-03T10:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.102068 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.102173 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.102227 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.102247 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.102264 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.205244 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.205387 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.205465 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.205493 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.205551 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.308918 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.309004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.309028 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.309061 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.309084 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.411495 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.411534 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.411543 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.411555 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.411565 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.501433 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.501435 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:29 crc kubenswrapper[5010]: E0203 10:03:29.501601 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:03:29 crc kubenswrapper[5010]: E0203 10:03:29.501674 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.501469 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:29 crc kubenswrapper[5010]: E0203 10:03:29.501790 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.514032 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.514078 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.514091 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.514106 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.514119 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.517265 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:47:36.586029313 +0000 UTC
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.619565 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.619634 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.619648 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.621090 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.621598 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.724605 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.724653 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.724669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.724690 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.724706 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.827366 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.827436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.827476 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.827507 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.827530 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.930861 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.930906 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.930916 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.930937 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:29 crc kubenswrapper[5010]: I0203 10:03:29.930950 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:29Z","lastTransitionTime":"2026-02-03T10:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.034948 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.034996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.035009 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.035029 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.035042 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.138205 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.138325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.138349 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.138377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.138396 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.242378 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.242502 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.242515 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.242535 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.242548 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.345823 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.345887 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.345903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.345926 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.345946 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.448932 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.448996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.449024 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.449059 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.449083 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.501619 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:30 crc kubenswrapper[5010]: E0203 10:03:30.501745 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.515173 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.517451 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:20:45.468978074 +0000 UTC Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.529494 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.544173 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.551138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.551204 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.551262 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.551294 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.551321 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.555670 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.566098 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 
10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.578140 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.591465 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.610786 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.631462 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.650531 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.654518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.654561 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.654577 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.654599 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.654618 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.667173 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.684814 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.698676 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.726971 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.740036 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.750876 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.756698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.756737 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.756747 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.756761 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.756771 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.768662 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 
10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 
10:03:30.786676 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:30Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.858861 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.858911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.858922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.858941 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.858955 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.961389 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.961431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.961439 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.961455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:30 crc kubenswrapper[5010]: I0203 10:03:30.961466 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:30Z","lastTransitionTime":"2026-02-03T10:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.064774 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.064849 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.064868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.064893 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.064911 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.168100 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.168171 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.168194 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.168258 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.168286 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.271282 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.271326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.271338 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.271356 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.271366 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.374018 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.374052 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.374062 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.374076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.374085 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.476947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.477004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.477022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.477049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.477083 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.501606 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.501682 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.501707 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:31 crc kubenswrapper[5010]: E0203 10:03:31.501794 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:31 crc kubenswrapper[5010]: E0203 10:03:31.501900 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:31 crc kubenswrapper[5010]: E0203 10:03:31.502026 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.517882 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:43:14.773731812 +0000 UTC Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.580807 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.580857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.580872 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.580892 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.580906 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.682914 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.682988 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.683011 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.683033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.683050 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.785754 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.785801 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.785812 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.785828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.785839 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.888654 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.888720 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.888742 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.888772 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.888794 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.990298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.990342 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.990350 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.990366 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:31 crc kubenswrapper[5010]: I0203 10:03:31.990377 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:31Z","lastTransitionTime":"2026-02-03T10:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.094049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.094123 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.094148 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.094177 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.094198 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.196722 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.196773 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.196784 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.196801 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.196813 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.299462 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.299519 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.299537 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.299559 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.299575 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.402312 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.402360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.402371 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.402391 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.402403 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.502173 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:32 crc kubenswrapper[5010]: E0203 10:03:32.502324 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.504593 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.504624 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.504634 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.504650 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.504662 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.518986 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:08:11.547284054 +0000 UTC Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.607364 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.607402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.607412 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.607426 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.607436 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.709630 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.709665 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.709674 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.709687 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.709696 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.812402 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.812449 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.812461 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.812477 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.812488 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.916038 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.916089 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.916114 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.916135 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:32 crc kubenswrapper[5010]: I0203 10:03:32.916182 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:32Z","lastTransitionTime":"2026-02-03T10:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.018277 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.018317 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.018335 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.018352 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.018362 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.120580 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.120649 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.120673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.120702 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.120723 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.223037 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.223091 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.223107 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.223127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.223144 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.310192 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.310303 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:37.310283648 +0000 UTC m=+147.466259777 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.310335 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.310365 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.310466 5010 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.310486 5010 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.310513 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:37.310503085 +0000 UTC m=+147.466479214 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.310533 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:37.310520795 +0000 UTC m=+147.466496924 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.325771 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.325825 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.325841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.325866 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.325888 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.411535 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.411663 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411844 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411868 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411888 5010 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411914 5010 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411965 5010 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411992 5010 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.411965 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:37.411943519 +0000 UTC m=+147.567919688 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.412108 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:37.412076432 +0000 UTC m=+147.568052631 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.429602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.429661 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.429684 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.429715 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.429740 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.501327 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.501367 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.501360 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.501505 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.501814 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:33 crc kubenswrapper[5010]: E0203 10:03:33.502038 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.520063 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:01:46.824495813 +0000 UTC Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.532777 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.532809 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.532817 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.532829 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.532840 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.635682 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.635743 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.635764 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.635792 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.635815 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.737938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.737997 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.738012 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.738032 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.738045 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.840977 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.841035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.841049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.841065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.841078 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.943903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.943938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.943949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.943963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:33 crc kubenswrapper[5010]: I0203 10:03:33.943975 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:33Z","lastTransitionTime":"2026-02-03T10:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.046450 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.046531 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.046566 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.046595 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.046619 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.150000 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.150039 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.150053 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.150072 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.150089 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.252812 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.252855 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.252863 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.252881 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.252892 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.355798 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.355838 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.355848 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.355865 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.355876 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.458670 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.458754 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.458788 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.458819 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.458839 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.506683 5010 scope.go:117] "RemoveContainer" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.507189 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:34 crc kubenswrapper[5010]: E0203 10:03:34.507393 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.520661 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:05:15.019053234 +0000 UTC Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.560956 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.560992 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.561003 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.561019 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.561031 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.663697 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.663723 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.663732 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.663745 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.663755 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.765951 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.765986 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.765994 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.766010 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.766024 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.871050 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.871106 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.871120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.871138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.871174 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.950421 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/2.log" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.954381 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.954879 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973276 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:34Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973326 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973434 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:34 crc kubenswrapper[5010]: I0203 10:03:34.973479 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:34Z","lastTransitionTime":"2026-02-03T10:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.024740 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} 
name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCon
tainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.034107 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.048303 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.059396 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.069110 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.075728 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.075769 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.075780 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.075797 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.075806 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.080817 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.090351 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.103552 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.116939 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.126730 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.142493 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.154398 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.165867 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.176500 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 
10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.178077 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.178117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.178127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.178143 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.178152 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.189452 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.199574 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.212554 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.283425 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.283464 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.283473 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.283488 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.283497 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.386204 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.386305 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.386323 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.386348 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.386364 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.488980 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.489023 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.489032 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.489047 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.489056 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.501326 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.501389 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.501403 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:35 crc kubenswrapper[5010]: E0203 10:03:35.501458 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:35 crc kubenswrapper[5010]: E0203 10:03:35.501528 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:35 crc kubenswrapper[5010]: E0203 10:03:35.501620 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.521771 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:56:35.059646578 +0000 UTC Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.591202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.591295 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.591319 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.591346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.591366 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.693834 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.693878 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.693889 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.693901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.693910 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.795828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.795872 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.795881 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.795908 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.795918 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.897482 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.897525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.897535 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.897549 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.897560 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.958618 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/3.log" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.959332 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/2.log" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.962749 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" exitCode=1 Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.962806 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.962874 5010 scope.go:117] "RemoveContainer" containerID="2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.966124 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:03:35 crc kubenswrapper[5010]: E0203 10:03:35.966553 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.978523 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.991975 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:35Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.999848 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.999897 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:35 crc kubenswrapper[5010]: I0203 10:03:35.999912 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:35.999933 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:35.999949 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:35Z","lastTransitionTime":"2026-02-03T10:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.006545 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc20681
6cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.022902 5010 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.038167 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.053458 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d99eed11cc0765d799890c515f3f7144c9cda73093f589f455cdc354756c2f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:08Z\\\",\\\"message\\\":\\\" Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 10:03:08.319356 6739 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:08Z is after 2025-08-24T17:21:41Z]\\\\nI0203 10:03:08.319342 6739 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-webhook]} 
name:Service_openshift-machine\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:35Z\\\",\\\"message\\\":\\\"omment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 10:03:35.411596 7160 services_controller.go:451] Built service openshift-marketplace/certified-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:03:35.411611 7160 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.062339 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.072521 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.083391 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.092331 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.101800 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.101849 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.101859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.101873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.101883 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.102775 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.117697 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.131476 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.142334 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 
10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.154680 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.165598 5010 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2f
c9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.175119 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001
edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.193124 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.203437 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.203470 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.203482 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.203498 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.203510 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.305781 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.305838 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.305850 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.305869 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.305882 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.408469 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.409139 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.409189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.409231 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.409249 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.501822 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:36 crc kubenswrapper[5010]: E0203 10:03:36.501986 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.511709 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.511739 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.511748 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.511759 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.511768 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.522274 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 10:28:31.291849512 +0000 UTC Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.614996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.615045 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.615061 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.615088 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.615122 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.718329 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.718395 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.718416 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.718436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.718452 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.821298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.821381 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.821405 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.821434 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.821458 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.927452 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.927787 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.927811 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.927840 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.927861 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:36Z","lastTransitionTime":"2026-02-03T10:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.968542 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/3.log" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.973510 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:03:36 crc kubenswrapper[5010]: E0203 10:03:36.973763 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:03:36 crc kubenswrapper[5010]: I0203 10:03:36.992500 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:36Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.010625 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.023355 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.031537 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.031574 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.031584 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.031600 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.031611 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.038796 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.051341 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.064252 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.078685 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.096687 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.111408 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.132636 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.133844 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.133883 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc 
kubenswrapper[5010]: I0203 10:03:37.133895 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.133911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.133923 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.146534 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 
03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.159724 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.172061 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.183386 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.196439 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.210585 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.236350 5010 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.236435 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.236453 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.236505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.236527 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.243313 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3
b066401ab66cb111b78a40db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:35Z\\\",\\\"message\\\":\\\"omment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 10:03:35.411596 7160 services_controller.go:451] Built service openshift-marketplace/certified-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:03:35.411611 7160 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.257901 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.339660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.339702 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.339711 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.339724 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.339734 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.442704 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.442730 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.442739 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.442751 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.442759 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.501055 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.501178 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.501360 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.501410 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.501544 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.501739 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.502514 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.502661 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.502757 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.502854 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.502940 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.520520 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.522469 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:27:53.983199686 +0000 UTC Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.526378 5010 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.526640 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.526919 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.527098 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.527314 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.543852 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.547631 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.547667 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.547683 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.547699 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.547711 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.561874 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.566115 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.566157 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.566176 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.566202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.566245 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.578633 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.582596 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.582632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.582643 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.582661 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.582676 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.598338 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:37Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:37 crc kubenswrapper[5010]: E0203 10:03:37.598570 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.603698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.603727 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.603735 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.603748 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.603756 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.707744 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.707795 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.707807 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.707825 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.707841 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.811007 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.811052 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.811064 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.811081 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.811094 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.914734 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.914801 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.914816 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.914840 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:37 crc kubenswrapper[5010]: I0203 10:03:37.914856 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:37Z","lastTransitionTime":"2026-02-03T10:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.017937 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.017975 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.017984 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.018002 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.018014 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.120117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.120427 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.120535 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.120701 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.120821 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.223531 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.223858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.224018 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.224167 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.224344 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.327810 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.327859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.327874 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.327892 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.327903 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.430195 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.430272 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.430288 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.430308 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.430322 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.501762 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:38 crc kubenswrapper[5010]: E0203 10:03:38.501934 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.524096 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:54:05.381243303 +0000 UTC Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.533554 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.533606 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.533625 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.533645 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.533662 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.637112 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.637197 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.637283 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.637320 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.637358 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.740455 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.740495 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.740505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.740519 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.740528 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.843499 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.843534 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.843543 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.843556 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.843564 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.946584 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.946858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.946947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.947039 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:38 crc kubenswrapper[5010]: I0203 10:03:38.947126 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:38Z","lastTransitionTime":"2026-02-03T10:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.049933 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.049978 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.049993 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.050013 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.050027 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.152546 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.152590 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.152600 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.152615 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.152625 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.254858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.254903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.254914 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.254941 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.254953 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.357281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.357353 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.357377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.357406 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.357427 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.459677 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.459716 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.459730 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.459749 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.459763 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.501904 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.502007 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:39 crc kubenswrapper[5010]: E0203 10:03:39.502073 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:39 crc kubenswrapper[5010]: E0203 10:03:39.502149 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.501908 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:39 crc kubenswrapper[5010]: E0203 10:03:39.502433 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.524778 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:14:09.337942731 +0000 UTC Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.561986 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.562073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.562109 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.562145 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.562170 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.665119 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.665157 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.665167 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.665183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.665193 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.768368 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.768406 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.768418 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.768434 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.768446 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.870659 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.870719 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.870738 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.870756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.870769 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.974119 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.974569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.974775 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.974967 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:39 crc kubenswrapper[5010]: I0203 10:03:39.975140 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:39Z","lastTransitionTime":"2026-02-03T10:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.078795 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.078860 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.078878 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.078905 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.078926 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.182465 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.182856 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.183042 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.183317 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.183563 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.286289 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.286588 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.286678 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.286765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.286847 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.392771 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.392843 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.392856 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.393235 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.393259 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.496303 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.496348 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.496360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.496375 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.496400 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.501695 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:40 crc kubenswrapper[5010]: E0203 10:03:40.501792 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.519530 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz
5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.525596 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:12:21.273161983 +0000 UTC Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.532830 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.546401 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.558113 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.568955 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.580647 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.596610 5010 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:35Z\\\",\\\"message\\\":\\\"omment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 10:03:35.411596 7160 services_controller.go:451] Built service openshift-marketplace/certified-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:03:35.411611 7160 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.599137 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.599185 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.599194 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.599229 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.599240 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.606488 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.618520 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.675720 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.685108 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.699768 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.701428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.701449 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.701457 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.701470 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.701480 5010 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.711071 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.722901 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.734434 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.744111 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 
crc kubenswrapper[5010]: I0203 10:03:40.754009 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.764151 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b0
52f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:40Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.803528 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.803568 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.803579 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 
crc kubenswrapper[5010]: I0203 10:03:40.803593 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.803603 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.906463 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.906536 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.906548 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.906568 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:40 crc kubenswrapper[5010]: I0203 10:03:40.906580 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:40Z","lastTransitionTime":"2026-02-03T10:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.009338 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.009390 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.009405 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.009428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.009444 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.113243 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.113314 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.113333 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.113355 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.113373 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.216566 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.216632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.216650 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.216673 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.216690 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.319949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.319992 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.320004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.320021 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.320031 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.426562 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.426606 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.426617 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.426636 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.426651 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.502055 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.502055 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:41 crc kubenswrapper[5010]: E0203 10:03:41.502763 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.502103 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:41 crc kubenswrapper[5010]: E0203 10:03:41.502883 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:41 crc kubenswrapper[5010]: E0203 10:03:41.502634 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.527379 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:20:08.914653323 +0000 UTC Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.529154 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.529230 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.529241 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.529253 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.529264 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.632733 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.632802 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.632828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.632852 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.632869 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.735392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.735445 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.735456 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.735473 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.735484 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.839185 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.839261 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.839281 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.839301 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.839315 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.941743 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.941779 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.941789 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.941804 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:41 crc kubenswrapper[5010]: I0203 10:03:41.941815 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:41Z","lastTransitionTime":"2026-02-03T10:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.044462 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.044805 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.044945 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.045065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.045161 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.148035 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.148073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.148085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.148100 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.148109 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.250454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.250514 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.250530 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.250586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.250603 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.353237 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.353286 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.353298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.353315 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.353326 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.456691 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.456907 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.456978 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.457104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.457180 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.501375 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:42 crc kubenswrapper[5010]: E0203 10:03:42.501524 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.528265 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:50:19.647605731 +0000 UTC Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.559761 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.559804 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.559817 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.559839 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.559851 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.661436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.661469 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.661477 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.661489 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.661499 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.764525 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.764573 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.764591 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.764617 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.764635 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.866844 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.866882 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.866890 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.866904 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.866914 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.969853 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.969917 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.969934 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.969963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:42 crc kubenswrapper[5010]: I0203 10:03:42.969983 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:42Z","lastTransitionTime":"2026-02-03T10:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.072986 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.073048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.073067 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.073092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.073109 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.176451 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.176724 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.176795 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.176861 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.176933 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.280698 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.280770 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.280809 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.280841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.280864 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.384569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.384609 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.384618 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.384633 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.384643 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.487619 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.487663 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.487674 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.487690 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.487703 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.501261 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.501371 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.501425 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:43 crc kubenswrapper[5010]: E0203 10:03:43.501384 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:43 crc kubenswrapper[5010]: E0203 10:03:43.501632 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:43 crc kubenswrapper[5010]: E0203 10:03:43.501919 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.529702 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:37:43.618484392 +0000 UTC Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.591276 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.591330 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.591341 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.591361 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.591373 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.694514 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.694560 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.694569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.694585 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.694599 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.797607 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.797711 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.797729 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.797756 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.797776 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.901377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.901420 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.901429 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.901442 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:43 crc kubenswrapper[5010]: I0203 10:03:43.901451 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:43Z","lastTransitionTime":"2026-02-03T10:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.003951 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.004022 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.004042 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.004066 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.004084 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.108007 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.108042 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.108051 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.108065 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.108075 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.211721 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.211833 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.211855 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.211878 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.211895 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.315878 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.315963 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.315990 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.316021 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.316057 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.419426 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.419474 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.419483 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.419500 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.419511 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.502204 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:44 crc kubenswrapper[5010]: E0203 10:03:44.502407 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.521784 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.521830 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.521841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.521858 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.521868 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.529779 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 21:46:46.767642633 +0000 UTC Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.624200 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.624256 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.624269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.624284 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.624294 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.726920 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.726996 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.727020 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.727048 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.727067 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.829312 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.829347 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.829356 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.829369 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.829378 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.932104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.932150 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.932166 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.932188 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:44 crc kubenswrapper[5010]: I0203 10:03:44.932246 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:44Z","lastTransitionTime":"2026-02-03T10:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.034418 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.034450 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.034459 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.034472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.034482 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.137089 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.137151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.137162 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.137179 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.137191 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.239076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.239307 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.239334 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.239365 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.239390 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.341949 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.342000 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.342014 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.342031 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.342043 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.444659 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.444854 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.444911 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.445016 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.445099 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.501529 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:45 crc kubenswrapper[5010]: E0203 10:03:45.501965 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.501623 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:45 crc kubenswrapper[5010]: E0203 10:03:45.502186 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.501623 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:45 crc kubenswrapper[5010]: E0203 10:03:45.502443 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.530155 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:22:57.517874439 +0000 UTC Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.547085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.547133 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.547145 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.547164 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.547177 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.650200 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.650316 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.650337 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.650369 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.650390 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.753392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.753712 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.753726 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.753741 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.753751 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.856431 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.856759 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.856857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.856944 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.857041 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.960201 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.960484 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.960585 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.960669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:45 crc kubenswrapper[5010]: I0203 10:03:45.960744 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:45Z","lastTransitionTime":"2026-02-03T10:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.063492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.063804 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.063920 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.064041 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.064151 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.167280 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.167574 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.167731 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.167860 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.168100 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.271620 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.271823 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.272036 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.272162 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.272352 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.374257 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.374298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.374311 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.374327 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.374339 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.476591 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.476629 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.476639 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.476654 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.476665 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.502156 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:46 crc kubenswrapper[5010]: E0203 10:03:46.502318 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.531234 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:04:41.075540244 +0000 UTC Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.579128 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.579171 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.579193 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.579233 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.579243 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.681579 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.681616 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.681624 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.681639 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.681650 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.784536 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.784580 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.784591 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.784606 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.784617 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.887253 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.887290 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.887298 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.887312 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.887323 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.990453 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.990492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.990503 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.990517 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:46 crc kubenswrapper[5010]: I0203 10:03:46.990528 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:46Z","lastTransitionTime":"2026-02-03T10:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.093341 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.093422 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.093443 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.093471 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.093494 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.196708 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.196762 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.196778 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.196798 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.196818 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.299524 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.299569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.299581 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.299624 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.299643 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.402803 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.402845 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.402857 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.402873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.402887 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.501609 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.501765 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.501863 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.501942 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.502009 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.502074 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.505602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.505639 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.505652 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.505666 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.505680 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.532155 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:51:15.085046302 +0000 UTC Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.607894 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.607940 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.607951 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.607976 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.607988 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.710515 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.710573 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.710596 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.710625 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.710646 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.813094 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.813133 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.813144 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.813159 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.813168 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.898632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.898669 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.898679 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.898692 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.898702 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.910188 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.914069 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.914108 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
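[Annotation: this entry shows the real blocker for the status updates: the node-identity webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, and with the node clock at 2026-02-03 every patch dies in the TLS handshake before the API server can apply it. The sketch below reproduces only the x509 validity-window comparison behind this message; the certificate's NotBefore value is an assumption.]

    // certwindow.go - the NotBefore <= now <= NotAfter check behind the x509 error.
    package main

    import (
        "crypto/x509"
        "fmt"
        "time"
    )

    // checkValidity applies the x509 rule: a certificate is usable only while
    // the current time lies inside its [NotBefore, NotAfter] window.
    func checkValidity(cert *x509.Certificate, now time.Time) error {
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            return fmt.Errorf("x509: certificate has expired or is not yet valid: current time %s is after %s",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        }
        return nil
    }

    func main() {
        // Stand-in certificate carrying the expiry seen in the log; NotBefore is assumed.
        cert := &x509.Certificate{
            NotBefore: time.Date(2025, 5, 24, 17, 21, 41, 0, time.UTC),
            NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC),
        }
        now := time.Date(2026, 2, 3, 10, 3, 47, 0, time.UTC)
        if err := checkValidity(cert, now); err != nil {
            fmt.Println(err) // matches the failure logged at kubelet_node_status.go:585
        }
    }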
event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.914120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.914136 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.914147 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.928625 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.932117 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.932183 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.932191 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.932204 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.932236 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.945400 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.948667 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.948711 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.948723 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.948738 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.948752 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.961527 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.964985 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.965030 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
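The kubelet retries the node-status PATCH a fixed number of times per update cycle; the fifth and final attempt of this cycle follows below, after which it logs "update node status exceeds retry count". A minimal counting sketch, assuming the journal text is piped in on stdin and taking the marker strings from these entries (the per-cycle budget is the nodeStatusUpdateRetry constant in upstream kubelet source, 5 at the time of writing; treat that constant as an assumption, not something this log states directly):

    import sys

    # Tally "will retry" errors between give-ups; the kubelet emits the
    # "exceeds retry count" entry once its per-cycle retry budget
    # (nodeStatusUpdateRetry in kubelet source, assumed to be 5) is spent.
    attempts = 0
    for line in sys.stdin:
        if "Error updating node status, will retry" in line:
            attempts += 1
        elif "update node status exceeds retry count" in line:
            print(f"status update cycle gave up after {attempts} failed attempts")
            attempts = 0

Run against this journal section, the counter should report five attempts for the cycle that gives up at 10:03:47.976972.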
event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.965049 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.965074 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.965090 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.976834 5010 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5c3370a1-7640-4a44-9e90-cab33c833dc6\\\",\\\"systemUUID\\\":\\\"83993284-2ce8-4ad1-9fe3-91205d527513\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:47Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:47 crc kubenswrapper[5010]: E0203 10:03:47.976972 5010 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.978346 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
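Every failed attempt in this cycle carries the same root cause in its x509 error: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03. A short sketch that pulls that skew straight out of the journal text (stdin assumed; the regex targets the exact Go x509 wording seen in these entries):

    import re
    import sys
    from datetime import datetime, timezone

    # Matches the Go x509 verification error wording logged above:
    #   "current time <now> is after <notAfter>"
    PAT = re.compile(
        r"current time (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z) "
        r"is after (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)"
    )

    def rfc3339(ts):
        return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            now, not_after = rfc3339(m.group(1)), rfc3339(m.group(2))
            print(f"webhook certificate expired {now - not_after} before this entry "
                  f"(notAfter={not_after.isoformat()})")
            break

For the entries above this reports an expiry roughly 162 days in the past, consistent with a CRC instance whose internally generated certificates aged out while the VM was powered off.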
event="NodeHasSufficientMemory" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.978377 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.978410 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.978428 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:47 crc kubenswrapper[5010]: I0203 10:03:47.978440 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:47Z","lastTransitionTime":"2026-02-03T10:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.081245 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.081303 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.081315 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.081331 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.081342 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.185092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.185481 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.185594 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.185721 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.185854 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.288473 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.288768 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.288876 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.288998 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.289132 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.392012 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.392067 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.392083 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.392111 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.392126 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.494314 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.494342 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.494352 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.494366 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.494375 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.502559 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:48 crc kubenswrapper[5010]: E0203 10:03:48.502685 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.533180 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:16:22.335549283 +0000 UTC
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.597469 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.597551 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.597582 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.597610 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.597630 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.700523 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.700579 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.700587 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.700614 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.700624 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.803647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.803702 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.803713 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.803735 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.803749 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.906412 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.906492 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.906518 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.906546 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:48 crc kubenswrapper[5010]: I0203 10:03:48.906571 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:48Z","lastTransitionTime":"2026-02-03T10:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.135822 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.135859 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.135870 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.135885 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.135896 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.238928 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.238980 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.239008 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.239033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.239052 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.341779 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.341824 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.341834 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.341852 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.341864 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.444512 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.444546 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.444555 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.444569 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.444578 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.501401 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:03:49 crc kubenswrapper[5010]: E0203 10:03:49.501541 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.501551 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.501646 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:03:49 crc kubenswrapper[5010]: E0203 10:03:49.501843 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:03:49 crc kubenswrapper[5010]: E0203 10:03:49.502492 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.503390 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:03:49 crc kubenswrapper[5010]: E0203 10:03:49.505491 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.520261 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.533646 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:43:03.463764154 +0000 UTC Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.548175 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.548252 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.548268 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.548294 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.548310 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.652172 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.652229 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.652238 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.652253 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.652262 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.755440 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.755517 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.755557 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.755588 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.755612 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.858618 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.858661 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.858671 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.858687 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.858696 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.963599 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.963637 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.963647 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.963664 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:49 crc kubenswrapper[5010]: I0203 10:03:49.963675 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:49Z","lastTransitionTime":"2026-02-03T10:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.066914 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.067018 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.067052 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.067085 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.067112 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.169423 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.169461 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.169470 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.169505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.169515 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.272124 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.272151 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.272159 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.272171 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.272180 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.374077 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.374124 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.374137 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.374154 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.374163 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.476559 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.476585 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.476596 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.476608 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.476618 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.502012 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:03:50 crc kubenswrapper[5010]: E0203 10:03:50.502243 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.519629 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.534569 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:40:58.691863461 +0000 UTC Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.535790 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c773dd46f854fe2fc85442f0f9214a8e28c372105c4b12a5ed3542f1a3034601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.550160 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f5tpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:03:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:23Z\\\",\\\"message\\\":\\\"2026-02-03T10:02:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7\\\\n2026-02-03T10:02:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_82399f8b-e1ce-4e52-8fa2-1fd2aa007ec7 to /host/opt/cni/bin/\\\\n2026-02-03T10:02:38Z [verbose] multus-daemon started\\\\n2026-02-03T10:02:38Z [verbose] Readiness Indicator file check\\\\n2026-02-03T10:03:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:03:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f57xn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f5tpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.561575 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e607e2ef-d3d6-4db0-b514-0d5321d9d28d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://818aa7f3cd84df63dc2d5dcdbfd02a158e4e3bc19c467dda9110763b7f7fe57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mclqv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s4xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579044 5010 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579092 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579104 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579141 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.579598 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3
b066401ab66cb111b78a40db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T10:03:35Z\\\",\\\"message\\\":\\\"omment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 10:03:35.411596 7160 services_controller.go:451] Built service openshift-marketplace/certified-operators cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 10:03:35.411611 7160 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:03:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xwzz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68p7p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.590013 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7lfkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a594fab0-c299-4489-be04-95a81c6dd272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5995732384ccbbccf9c7e284b151c07b7195fe00d12b1118b06ff883f3fabc6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llslg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7lfkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.638136 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478f7c29-f920-438f-bd2f-834ad379acce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33c6da9549a593611fce2b9ac2e1730afa277e407ab3d553648c86cca72df9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf4d9a81d55d48408fc220d27652171a691f91f84894a35677f27f1ea9beaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://9336946ed9378970e4cf4204dae54c84331a56d8bb0c34a96a18756a03564c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32bb7e23791044ac62b774a809eefec90c37195581f3a062ec0328a0f3156771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64179c9dc656cd2ae54ef87a2dd73427521252105f7f7db946b69951cf308654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd3172dc98f9bd36f672f65272b6ef0548d5ab55e45c8d1c3309735fc3d20a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd3172dc98f9bd36f672f65272b6ef0548d5ab55e45c8d1c3309735fc3d20a46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95a84d597354ad5b8f4b36049c29ec5
bef9982f82c988bba69e9fbc77958032e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a84d597354ad5b8f4b36049c29ec5bef9982f82c988bba69e9fbc77958032e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a5ba2a290693520ab1c03bfcf9baa02768d6112f452c205d187b827ec065860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a5ba2a290693520ab1c03bfcf9baa02768d6112f452c205d187b827ec065860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.674733 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.681978 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.682042 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.682054 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.682070 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.682080 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.686973 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clvdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081d0234-b506-49ff-81c9-c535f6e1c588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6rrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clvdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.701505 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 10:02:13.925307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 10:02:13.927134 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1926052719/tls.crt::/tmp/serving-cert-1926052719/tls.key\\\\\\\"\\\\nI0203 10:02:29.337292 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 10:02:29.340770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 10:02:29.340802 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 10:02:29.340836 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 10:02:29.340845 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 10:02:29.352240 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 10:02:29.352267 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352274 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 10:02:29.352279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 10:02:29.352283 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 10:02:29.352286 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 10:02:29.352290 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 10:02:29.352303 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 10:02:29.355285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.715307 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.728991 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72afd87a-e015-418a-a135-cb8f7e4b5874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67df496c994dcd1a4db0a0020e9418d343a9cf6213129b710d7aedbc8e937b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b03e3ed2e0087b94deaf28745e586ddbbd7546c8471dcf0ec0ced53a8c0b052f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41768635703e9a6b2bf4db506005d8f5584a33dc6baa50017200b4244e258e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da668c2a906e023b7095232872d6279efb6531c7dc7f21842e41351222e446db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.744062 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d0f0ab90f05184cd6b0babb3d2054049c59b865919df0183aea79ba27ce8569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 
2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.757787 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bde7a589-c2e8-48b2-aa06-2fb99731df31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd92ba9459cfa304834ad3741979187ec71c431f81f49a7fb80cc0a2fd7fc4af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b350689945fd5de7d170e2294cc09dbddd0d2b106fae67b673404a397358939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4vzdl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.769288 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d3dd09d-110c-4712-9d1b-d7946d168bbf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25477c6ea277d8a685b77167aab64449e8d3be6ac2a737435f708a81bc183d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://113769d25258b4f26c6178b7eae6a036d90ad158c8ffff23f0bd835efd9c1c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.782749 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"890c4139-039f-487f-90ed-68f8e2ee0942\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401e877c22f8555c0c988f9fcc46844220379bb41035188f9a2130b26ab4264b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed59e53eba1fd815b496a61f7bfe2e2a897ce2a685cd761bc32766bd29a02868\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f949e1d97b3ac694ee21b442409a0c0c498deb5f7e2fc9bbd5c46cba1e4636f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.784309 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.784392 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.784408 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.784432 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.784447 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.798715 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cvpds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5c4274d-0165-4762-850f-b2a2ceb57c0b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ee9167336f839f34e5b24d7e10102373f53d24572964114c48c0d7dedee6623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3ece08f39ccece7747619bfd83c20c6c5d2a063d7dbeef01be80414d6000a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bc206816cf1d464b395a0c5423001284e66e5374e98859b128dc8105861ddeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2633da4790664f185d3016e992288dd846dada5602a5d030e250f75d74938fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://443b223b3391fb015901858f11627ff819b74c8f50cc569df95f8e380b4aea5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32efe176066ce43e2f08564f04fcc3b8c99ed8f9b5dfc61d1f9134fc6b9cb8f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da864b6ae4d1952f16aaf8d00242954da11d0c1fc0116cbcac4b1921f329381d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:02:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nmmvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cvpds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.814090 5010 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d456b72e9e512ae75b54e3765f1f171666840db59a2acfe6bcf9d0bf0c0f945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01dd46b43bbb50c79bf5ef997d1e0f88c12a5bfd8eb2d3ee28a2d1546a6b9436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.826957 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-89h2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab56d94-9407-4305-9e87-55e378a0878f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T10:02:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5fbb0c72c690409220edd6589334fc958b1432a78d9a41ec1762ade32acfb4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:02:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6l8d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T10:02:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-89h2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T10:03:50Z is after 2025-08-24T17:21:41Z" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.887056 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.887120 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.887130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.887142 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.887151 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.990038 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.990130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.990155 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.990179 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:50 crc kubenswrapper[5010]: I0203 10:03:50.990197 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:50Z","lastTransitionTime":"2026-02-03T10:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.092766 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.092803 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.092814 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.092828 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.092841 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.195406 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.195448 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.195460 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.195478 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.195489 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.297940 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.297983 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.297992 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.298004 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.298015 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.400782 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.400826 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.400841 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.400861 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.400886 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.501305 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.501334 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.501305 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:51 crc kubenswrapper[5010]: E0203 10:03:51.501487 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:51 crc kubenswrapper[5010]: E0203 10:03:51.501519 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:51 crc kubenswrapper[5010]: E0203 10:03:51.501584 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.502651 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.502675 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.502683 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.502693 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.502704 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.535362 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:23:13.915078467 +0000 UTC Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.605812 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.605861 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.605873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.605890 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.605904 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.709040 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.709097 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.709107 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.709127 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.709144 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.812407 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.812504 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.812519 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.812544 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.812561 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.915265 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.915325 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.915336 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.915356 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:51 crc kubenswrapper[5010]: I0203 10:03:51.915367 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:51Z","lastTransitionTime":"2026-02-03T10:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.017603 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.017644 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.017655 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.017674 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.017688 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.120024 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.120071 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.120086 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.120106 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.120121 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.222159 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.222254 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.222269 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.222286 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.222322 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.324421 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.324462 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.324473 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.324491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.324503 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.427143 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.427228 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.427245 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.427261 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.427272 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.501846 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:52 crc kubenswrapper[5010]: E0203 10:03:52.502048 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.529089 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.529134 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.529149 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.529164 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.529175 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.536441 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:16:33.453514951 +0000 UTC Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.632013 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.632089 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.632110 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.632126 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.632136 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.734935 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.734981 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.734993 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.735012 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.735025 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.837881 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.837973 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.838000 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.838030 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.838055 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.940138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.940189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.940201 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.940240 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:52 crc kubenswrapper[5010]: I0203 10:03:52.940253 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:52Z","lastTransitionTime":"2026-02-03T10:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.043541 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.043602 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.043612 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.043626 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.043635 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.145264 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.145290 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.145297 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.145309 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.145318 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.247578 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.247642 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.247660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.247679 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.247693 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.349811 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.349873 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.349898 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.349922 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.349937 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.452429 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.452472 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.452485 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.452502 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.452513 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.501106 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.501125 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:53 crc kubenswrapper[5010]: E0203 10:03:53.501263 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.501279 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:53 crc kubenswrapper[5010]: E0203 10:03:53.501332 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:53 crc kubenswrapper[5010]: E0203 10:03:53.501378 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.537043 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:23:10.316776404 +0000 UTC Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.554706 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.554741 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.554752 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.554767 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.554778 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.657121 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.657189 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.657202 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.657236 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.657249 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.728808 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:53 crc kubenswrapper[5010]: E0203 10:03:53.729005 5010 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:03:53 crc kubenswrapper[5010]: E0203 10:03:53.729086 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs podName:081d0234-b506-49ff-81c9-c535f6e1c588 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:57.729066997 +0000 UTC m=+167.885043126 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs") pod "network-metrics-daemon-clvdz" (UID: "081d0234-b506-49ff-81c9-c535f6e1c588") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.758907 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.758947 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.758959 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.758976 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.758986 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.861449 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.861479 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.861487 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.861500 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.861509 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.964121 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.964168 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.964179 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.964194 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:53 crc kubenswrapper[5010]: I0203 10:03:53.964206 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:53Z","lastTransitionTime":"2026-02-03T10:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.066738 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.066765 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.066775 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.066788 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.066796 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.199331 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.199360 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.199370 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.199384 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.199394 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.301811 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.301903 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.301920 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.301938 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.301949 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.404091 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.404138 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.404150 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.404168 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.404179 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.502096 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:54 crc kubenswrapper[5010]: E0203 10:03:54.502281 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.505632 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.505660 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.505668 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.505678 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.505688 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.537686 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:46:56.072638831 +0000 UTC Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.608199 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.608255 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.608264 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.608278 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.608287 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.711435 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.711480 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.711501 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.711524 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.711541 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.814051 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.814084 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.814093 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.814108 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.814120 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.916372 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.916454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.916468 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.916491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:54 crc kubenswrapper[5010]: I0203 10:03:54.916506 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:54Z","lastTransitionTime":"2026-02-03T10:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.018797 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.018843 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.018855 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.018874 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.018885 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.121624 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.121665 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.121677 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.121692 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.121704 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.223798 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.223868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.223885 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.223909 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.223928 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.326610 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.326658 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.326670 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.326685 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.326695 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.430039 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.430119 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.430132 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.430149 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.430165 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.501461 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.501461 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.501858 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:55 crc kubenswrapper[5010]: E0203 10:03:55.502062 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:55 crc kubenswrapper[5010]: E0203 10:03:55.502137 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:55 crc kubenswrapper[5010]: E0203 10:03:55.502295 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.532256 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.532297 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.532307 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.532322 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.532331 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.538323 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:59:35.030311233 +0000 UTC Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.635447 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.635483 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.635491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.635505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.635515 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.738814 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.738856 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.738865 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.738882 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.738892 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.841654 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.841706 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.841718 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.841735 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.841748 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.944796 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.944830 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.944837 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.944850 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:55 crc kubenswrapper[5010]: I0203 10:03:55.944860 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:55Z","lastTransitionTime":"2026-02-03T10:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.047491 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.047532 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.047542 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.047559 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.047570 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.149781 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.149824 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.149836 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.149849 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.149858 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.252480 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.252527 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.252538 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.252575 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.252587 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.354984 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.355079 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.355160 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.355302 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.355340 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.457534 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.457831 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.458076 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.458261 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.458411 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.501169 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:56 crc kubenswrapper[5010]: E0203 10:03:56.501488 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.539125 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:30:11.449634721 +0000 UTC Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.561275 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.561321 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.561337 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.561357 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.561375 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.663386 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.663436 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.663454 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.663476 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.663494 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.765496 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.765733 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.765797 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.765907 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.765987 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.868819 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.868868 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.868882 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.868901 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.868916 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.970426 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.970466 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.970478 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.970493 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:56 crc kubenswrapper[5010]: I0203 10:03:56.970502 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:56Z","lastTransitionTime":"2026-02-03T10:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.072843 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.073115 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.073372 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.073693 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.073873 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:57Z","lastTransitionTime":"2026-02-03T10:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.176136 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.176165 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.176174 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.176187 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.176196 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:57Z","lastTransitionTime":"2026-02-03T10:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.277983 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.278033 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.278052 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.278073 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.278086 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:57Z","lastTransitionTime":"2026-02-03T10:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.380011 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.380322 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.380481 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.380586 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.380704 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:57Z","lastTransitionTime":"2026-02-03T10:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.484068 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.484143 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.484155 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.484170 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.484185 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:57Z","lastTransitionTime":"2026-02-03T10:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.501849 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:57 crc kubenswrapper[5010]: E0203 10:03:57.501962 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.502011 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:57 crc kubenswrapper[5010]: E0203 10:03:57.502066 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.502580 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:57 crc kubenswrapper[5010]: E0203 10:03:57.502907 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.540010 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:43:06.482694299 +0000 UTC Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.586286 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.586331 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.586340 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.586354 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.586364 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:57Z","lastTransitionTime":"2026-02-03T10:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.688438 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.688483 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.688496 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.688514 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.688528 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:57Z","lastTransitionTime":"2026-02-03T10:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.790720 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.790766 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.790782 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.790803 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.790819 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:57Z","lastTransitionTime":"2026-02-03T10:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.893665 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.893700 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.893707 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.893720 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.893728 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:57Z","lastTransitionTime":"2026-02-03T10:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.996157 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.996432 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.996526 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.996658 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:57 crc kubenswrapper[5010]: I0203 10:03:57.996752 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:57Z","lastTransitionTime":"2026-02-03T10:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.099097 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.099434 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.099505 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.099583 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.099656 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:58Z","lastTransitionTime":"2026-02-03T10:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.196731 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.196771 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.196780 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.196794 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.196805 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:58Z","lastTransitionTime":"2026-02-03T10:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.221130 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.221168 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.221179 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.221197 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.221232 5010 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T10:03:58Z","lastTransitionTime":"2026-02-03T10:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.243885 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2"] Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.244321 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.246857 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.247279 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.247371 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.247371 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.280732 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-89h2z" podStartSLOduration=83.280708569 podStartE2EDuration="1m23.280708569s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.263449793 +0000 UTC m=+108.419425932" watchObservedRunningTime="2026-02-03 10:03:58.280708569 +0000 UTC m=+108.436684698" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.294970 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-cvpds" podStartSLOduration=83.294950558 podStartE2EDuration="1m23.294950558s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.281027678 +0000 UTC m=+108.437003817" watchObservedRunningTime="2026-02-03 10:03:58.294950558 +0000 UTC m=+108.450926697" Feb 03 10:03:58 crc 
kubenswrapper[5010]: I0203 10:03:58.347815 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-f5tpq" podStartSLOduration=83.347797316 podStartE2EDuration="1m23.347797316s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.347248381 +0000 UTC m=+108.503224520" watchObservedRunningTime="2026-02-03 10:03:58.347797316 +0000 UTC m=+108.503773445" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.360299 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podStartSLOduration=83.360278095 podStartE2EDuration="1m23.360278095s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.360087339 +0000 UTC m=+108.516063478" watchObservedRunningTime="2026-02-03 10:03:58.360278095 +0000 UTC m=+108.516254224" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.375003 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.375054 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.375083 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.375131 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.375191 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.391080 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/node-ca-7lfkq" podStartSLOduration=83.391059799 podStartE2EDuration="1m23.391059799s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.390946336 +0000 UTC m=+108.546922485" watchObservedRunningTime="2026-02-03 10:03:58.391059799 +0000 UTC m=+108.547035948" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.416838 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.416814729 podStartE2EDuration="9.416814729s" podCreationTimestamp="2026-02-03 10:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.416440898 +0000 UTC m=+108.572417037" watchObservedRunningTime="2026-02-03 10:03:58.416814729 +0000 UTC m=+108.572790858" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.463939 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.463921572 podStartE2EDuration="1m29.463921572s" podCreationTimestamp="2026-02-03 10:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.463879521 +0000 UTC m=+108.619855660" watchObservedRunningTime="2026-02-03 10:03:58.463921572 +0000 UTC m=+108.619897701" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476349 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476405 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476433 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476490 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476547 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476619 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.476664 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.477261 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.481941 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.481923619 podStartE2EDuration="1m25.481923619s" podCreationTimestamp="2026-02-03 10:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.481493867 +0000 UTC m=+108.637469996" watchObservedRunningTime="2026-02-03 10:03:58.481923619 +0000 UTC m=+108.637899748" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.482292 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.493902 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.493885053 podStartE2EDuration="59.493885053s" podCreationTimestamp="2026-02-03 10:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.493526983 +0000 UTC m=+108.649503112" watchObservedRunningTime="2026-02-03 10:03:58.493885053 +0000 UTC m=+108.649861182" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.496599 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jl5t2\" (UID: \"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.501375 5010 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:03:58 crc kubenswrapper[5010]: E0203 10:03:58.501511 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.535735 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.535714255 podStartE2EDuration="37.535714255s" podCreationTimestamp="2026-02-03 10:03:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.535232131 +0000 UTC m=+108.691208280" watchObservedRunningTime="2026-02-03 10:03:58.535714255 +0000 UTC m=+108.691690374" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.536172 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4vzdl" podStartSLOduration=83.536167578 podStartE2EDuration="1m23.536167578s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:58.52442644 +0000 UTC m=+108.680402579" watchObservedRunningTime="2026-02-03 10:03:58.536167578 +0000 UTC m=+108.692143707" Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.541187 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 16:12:25.677292201 +0000 UTC Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.541277 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.547684 5010 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 10:03:58 crc kubenswrapper[5010]: I0203 10:03:58.562001 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" Feb 03 10:03:59 crc kubenswrapper[5010]: I0203 10:03:59.167114 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" event={"ID":"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50","Type":"ContainerStarted","Data":"7b807cb4be28218027fc16855c54c087d9ae8be394606a21c1308e9f78a83a93"} Feb 03 10:03:59 crc kubenswrapper[5010]: I0203 10:03:59.167168 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" event={"ID":"6657e7d5-f3b2-4194-a82e-f2e4ca2f0b50","Type":"ContainerStarted","Data":"be38329100afa7716b13b0d201891bde5a0caebc37836d91c2e14cf54d247542"} Feb 03 10:03:59 crc kubenswrapper[5010]: I0203 10:03:59.501244 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:03:59 crc kubenswrapper[5010]: E0203 10:03:59.501655 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:03:59 crc kubenswrapper[5010]: I0203 10:03:59.501465 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:03:59 crc kubenswrapper[5010]: E0203 10:03:59.501745 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:03:59 crc kubenswrapper[5010]: I0203 10:03:59.501386 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:03:59 crc kubenswrapper[5010]: E0203 10:03:59.501825 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:00 crc kubenswrapper[5010]: I0203 10:04:00.502110 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:00 crc kubenswrapper[5010]: E0203 10:04:00.503645 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:00 crc kubenswrapper[5010]: I0203 10:04:00.504014 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:04:00 crc kubenswrapper[5010]: E0203 10:04:00.504325 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:04:01 crc kubenswrapper[5010]: I0203 10:04:01.502291 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:01 crc kubenswrapper[5010]: I0203 10:04:01.502405 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:01 crc kubenswrapper[5010]: E0203 10:04:01.502516 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:01 crc kubenswrapper[5010]: I0203 10:04:01.502527 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:01 crc kubenswrapper[5010]: E0203 10:04:01.502591 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:01 crc kubenswrapper[5010]: E0203 10:04:01.502865 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:02 crc kubenswrapper[5010]: I0203 10:04:02.501513 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:02 crc kubenswrapper[5010]: E0203 10:04:02.501663 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:03 crc kubenswrapper[5010]: I0203 10:04:03.501691 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:03 crc kubenswrapper[5010]: I0203 10:04:03.501851 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:03 crc kubenswrapper[5010]: I0203 10:04:03.501938 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:03 crc kubenswrapper[5010]: E0203 10:04:03.502082 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:03 crc kubenswrapper[5010]: E0203 10:04:03.502514 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:03 crc kubenswrapper[5010]: E0203 10:04:03.502877 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:04 crc kubenswrapper[5010]: I0203 10:04:04.502199 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:04 crc kubenswrapper[5010]: E0203 10:04:04.502384 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:05 crc kubenswrapper[5010]: I0203 10:04:05.501127 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:05 crc kubenswrapper[5010]: I0203 10:04:05.501147 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:05 crc kubenswrapper[5010]: E0203 10:04:05.501315 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:05 crc kubenswrapper[5010]: E0203 10:04:05.501480 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:05 crc kubenswrapper[5010]: I0203 10:04:05.501765 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:05 crc kubenswrapper[5010]: E0203 10:04:05.502024 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:06 crc kubenswrapper[5010]: I0203 10:04:06.501685 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:06 crc kubenswrapper[5010]: E0203 10:04:06.502513 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:07 crc kubenswrapper[5010]: I0203 10:04:07.502127 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:07 crc kubenswrapper[5010]: I0203 10:04:07.502163 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:07 crc kubenswrapper[5010]: I0203 10:04:07.502206 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:07 crc kubenswrapper[5010]: E0203 10:04:07.502364 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:07 crc kubenswrapper[5010]: E0203 10:04:07.502560 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:07 crc kubenswrapper[5010]: E0203 10:04:07.502750 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:08 crc kubenswrapper[5010]: I0203 10:04:08.501198 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:08 crc kubenswrapper[5010]: E0203 10:04:08.501372 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:09 crc kubenswrapper[5010]: I0203 10:04:09.501916 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:09 crc kubenswrapper[5010]: I0203 10:04:09.501964 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:09 crc kubenswrapper[5010]: I0203 10:04:09.501962 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:09 crc kubenswrapper[5010]: E0203 10:04:09.502131 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:09 crc kubenswrapper[5010]: E0203 10:04:09.502183 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:09 crc kubenswrapper[5010]: E0203 10:04:09.502248 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:10 crc kubenswrapper[5010]: E0203 10:04:10.449185 5010 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 03 10:04:10 crc kubenswrapper[5010]: I0203 10:04:10.501784 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:10 crc kubenswrapper[5010]: E0203 10:04:10.502955 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:10 crc kubenswrapper[5010]: E0203 10:04:10.601124 5010 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.206176 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/1.log" Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.207131 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/0.log" Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.207326 5010 generic.go:334] "Generic (PLEG): container finished" podID="8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef" containerID="d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe" exitCode=1 Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.207399 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerDied","Data":"d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe"} Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.207483 5010 scope.go:117] "RemoveContainer" containerID="b4694d69d81aa2c19ed29c21d07298a0c2e43af1189c7318dd0204a0880aed2a" Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.208327 5010 scope.go:117] "RemoveContainer" containerID="d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe" Feb 03 10:04:11 crc kubenswrapper[5010]: E0203 10:04:11.208803 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-f5tpq_openshift-multus(8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef)\"" pod="openshift-multus/multus-f5tpq" podUID="8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef" Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.235095 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jl5t2" podStartSLOduration=96.235077852 podStartE2EDuration="1m36.235077852s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:03:59.182642359 +0000 UTC m=+109.338618508" watchObservedRunningTime="2026-02-03 10:04:11.235077852 +0000 UTC m=+121.391053981" Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.502033 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.502024 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.502108 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:11 crc kubenswrapper[5010]: E0203 10:04:11.502659 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:11 crc kubenswrapper[5010]: E0203 10:04:11.502811 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:11 crc kubenswrapper[5010]: E0203 10:04:11.502898 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:11 crc kubenswrapper[5010]: I0203 10:04:11.503118 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:04:11 crc kubenswrapper[5010]: E0203 10:04:11.503377 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68p7p_openshift-ovn-kubernetes(afbb630a-0dee-4c9c-90ff-cb710b9da3f2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" Feb 03 10:04:12 crc kubenswrapper[5010]: I0203 10:04:12.213134 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/1.log" Feb 03 10:04:12 crc kubenswrapper[5010]: I0203 10:04:12.501290 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:12 crc kubenswrapper[5010]: E0203 10:04:12.501479 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:13 crc kubenswrapper[5010]: I0203 10:04:13.501784 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:13 crc kubenswrapper[5010]: E0203 10:04:13.502129 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:13 crc kubenswrapper[5010]: I0203 10:04:13.501784 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:13 crc kubenswrapper[5010]: I0203 10:04:13.501784 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:13 crc kubenswrapper[5010]: E0203 10:04:13.502195 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:13 crc kubenswrapper[5010]: E0203 10:04:13.502375 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:14 crc kubenswrapper[5010]: I0203 10:04:14.502171 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:14 crc kubenswrapper[5010]: E0203 10:04:14.502476 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:15 crc kubenswrapper[5010]: I0203 10:04:15.501554 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:15 crc kubenswrapper[5010]: I0203 10:04:15.501607 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:15 crc kubenswrapper[5010]: E0203 10:04:15.501684 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:15 crc kubenswrapper[5010]: E0203 10:04:15.501878 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:15 crc kubenswrapper[5010]: I0203 10:04:15.502123 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:15 crc kubenswrapper[5010]: E0203 10:04:15.502392 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:15 crc kubenswrapper[5010]: E0203 10:04:15.602863 5010 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 10:04:16 crc kubenswrapper[5010]: I0203 10:04:16.501728 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:16 crc kubenswrapper[5010]: E0203 10:04:16.501860 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:17 crc kubenswrapper[5010]: I0203 10:04:17.501961 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:17 crc kubenswrapper[5010]: I0203 10:04:17.502013 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:17 crc kubenswrapper[5010]: I0203 10:04:17.501965 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:17 crc kubenswrapper[5010]: E0203 10:04:17.502161 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:17 crc kubenswrapper[5010]: E0203 10:04:17.502339 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:17 crc kubenswrapper[5010]: E0203 10:04:17.502398 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:18 crc kubenswrapper[5010]: I0203 10:04:18.502037 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:18 crc kubenswrapper[5010]: E0203 10:04:18.502153 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:19 crc kubenswrapper[5010]: I0203 10:04:19.501688 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:19 crc kubenswrapper[5010]: I0203 10:04:19.501812 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:19 crc kubenswrapper[5010]: E0203 10:04:19.501909 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:19 crc kubenswrapper[5010]: I0203 10:04:19.501957 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:19 crc kubenswrapper[5010]: E0203 10:04:19.502004 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:19 crc kubenswrapper[5010]: E0203 10:04:19.502067 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:20 crc kubenswrapper[5010]: I0203 10:04:20.502026 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:20 crc kubenswrapper[5010]: E0203 10:04:20.503091 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:20 crc kubenswrapper[5010]: E0203 10:04:20.603622 5010 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 10:04:21 crc kubenswrapper[5010]: I0203 10:04:21.501953 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:21 crc kubenswrapper[5010]: I0203 10:04:21.502019 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:21 crc kubenswrapper[5010]: I0203 10:04:21.502106 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:21 crc kubenswrapper[5010]: E0203 10:04:21.502154 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:21 crc kubenswrapper[5010]: E0203 10:04:21.502387 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:21 crc kubenswrapper[5010]: E0203 10:04:21.502490 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:22 crc kubenswrapper[5010]: I0203 10:04:22.501618 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:22 crc kubenswrapper[5010]: E0203 10:04:22.501733 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:23 crc kubenswrapper[5010]: I0203 10:04:23.502080 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:23 crc kubenswrapper[5010]: I0203 10:04:23.502115 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:23 crc kubenswrapper[5010]: E0203 10:04:23.502368 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:23 crc kubenswrapper[5010]: I0203 10:04:23.502408 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:23 crc kubenswrapper[5010]: E0203 10:04:23.503045 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:23 crc kubenswrapper[5010]: E0203 10:04:23.503168 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:23 crc kubenswrapper[5010]: I0203 10:04:23.503615 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.254842 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/3.log" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.259478 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerStarted","Data":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.260450 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.290197 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podStartSLOduration=109.290175377 podStartE2EDuration="1m49.290175377s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:24.289360244 +0000 UTC m=+134.445336383" watchObservedRunningTime="2026-02-03 10:04:24.290175377 +0000 UTC m=+134.446151516" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.307201 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-clvdz"] Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.307329 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:24 crc kubenswrapper[5010]: E0203 10:04:24.307433 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.503501 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 10:04:24 crc kubenswrapper[5010]: E0203 10:04:24.503696 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 10:04:24 crc kubenswrapper[5010]: I0203 10:04:24.503820 5010 scope.go:117] "RemoveContainer" containerID="d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe" Feb 03 10:04:25 crc kubenswrapper[5010]: I0203 10:04:25.264448 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/1.log" Feb 03 10:04:25 crc kubenswrapper[5010]: I0203 10:04:25.264799 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerStarted","Data":"350b279aaf7efa7dad21bc0c20fa082b7c655a83b208a5091e614ce3efe34ce4"} Feb 03 10:04:25 crc kubenswrapper[5010]: I0203 10:04:25.501415 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 10:04:25 crc kubenswrapper[5010]: I0203 10:04:25.501479 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:04:25 crc kubenswrapper[5010]: E0203 10:04:25.501856 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 10:04:25 crc kubenswrapper[5010]: E0203 10:04:25.501861 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 10:04:25 crc kubenswrapper[5010]: E0203 10:04:25.604909 5010 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 10:04:26 crc kubenswrapper[5010]: I0203 10:04:26.501200 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:26 crc kubenswrapper[5010]: E0203 10:04:26.501588 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588" Feb 03 10:04:26 crc kubenswrapper[5010]: I0203 10:04:26.501249 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:04:26 crc kubenswrapper[5010]: E0203 10:04:26.501693 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 10:04:27 crc kubenswrapper[5010]: I0203 10:04:27.501123 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:04:27 crc kubenswrapper[5010]: I0203 10:04:27.501195 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:04:27 crc kubenswrapper[5010]: E0203 10:04:27.501324 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:04:27 crc kubenswrapper[5010]: E0203 10:04:27.501449 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:04:28 crc kubenswrapper[5010]: I0203 10:04:28.502048 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:04:28 crc kubenswrapper[5010]: I0203 10:04:28.502096 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:04:28 crc kubenswrapper[5010]: E0203 10:04:28.502205 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:04:28 crc kubenswrapper[5010]: E0203 10:04:28.502362 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 10:04:29 crc kubenswrapper[5010]: I0203 10:04:29.502027 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:04:29 crc kubenswrapper[5010]: I0203 10:04:29.502038 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:04:29 crc kubenswrapper[5010]: E0203 10:04:29.502246 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 10:04:29 crc kubenswrapper[5010]: E0203 10:04:29.502302 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 10:04:30 crc kubenswrapper[5010]: I0203 10:04:30.503035 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:04:30 crc kubenswrapper[5010]: I0203 10:04:30.503050 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:04:30 crc kubenswrapper[5010]: E0203 10:04:30.506801 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clvdz" podUID="081d0234-b506-49ff-81c9-c535f6e1c588"
Feb 03 10:04:30 crc kubenswrapper[5010]: E0203 10:04:30.506942 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 10:04:31 crc kubenswrapper[5010]: I0203 10:04:31.502010 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:04:31 crc kubenswrapper[5010]: I0203 10:04:31.502088 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:04:31 crc kubenswrapper[5010]: I0203 10:04:31.505531 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 03 10:04:31 crc kubenswrapper[5010]: I0203 10:04:31.505724 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.502656 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz"
Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.502674 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.504752 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.505560 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.505622 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 03 10:04:32 crc kubenswrapper[5010]: I0203 10:04:32.506948 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.380520 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:37 crc kubenswrapper[5010]: E0203 10:04:37.380704 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:06:39.38067189 +0000 UTC m=+269.536648029 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.380849 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.380884 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.381996 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.390241 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.481577 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.481635 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.485060 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.485989 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.516845 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.523945 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:04:37 crc kubenswrapper[5010]: I0203 10:04:37.622978 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 10:04:37 crc kubenswrapper[5010]: W0203 10:04:37.816553 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-5e1a7306731dd81d301834454f60668151e902006b4113f2287a12ec90905189 WatchSource:0}: Error finding container 5e1a7306731dd81d301834454f60668151e902006b4113f2287a12ec90905189: Status 404 returned error can't find the container with id 5e1a7306731dd81d301834454f60668151e902006b4113f2287a12ec90905189
Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.303987 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3413bfbed34b65e745726b9346066c38fd2609458111021ec8f48d5f4b46a753"}
Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.304049 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5e1a7306731dd81d301834454f60668151e902006b4113f2287a12ec90905189"}
Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.310514 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0376cc375a0e1e8c69dd83f5dd576d65d1cf311b80f2b866b444b1e0575da47d"}
Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.310567 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bc517f5913017e8b7d1def57ce7587beb16dbbf0da5f1d454399fb8949116309"}
Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.311639 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b1fa09b9e7974cb2dcc26ee6df62c655a70c382f980a0b20d974477d4a1ec12a"}
Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.311671 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"56efd723615985c2b4f0ba50cd95709e1b969ff835681c0261c48845a408dc40"}
Feb 03 10:04:38 crc kubenswrapper[5010]: I0203 10:04:38.311842 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.336113 5010 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.369076 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.369581 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.370036 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9lvbs"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.370730 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.371045 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.371657 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.371863 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.371923 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.372227 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.372306 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.372629 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.372797 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.377966 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.380403 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.380422 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.380439 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381249 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381265 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381284 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381409 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381532 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381838 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381911 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381933 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381990 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.381997 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.382194 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.382296 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.382355 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.382430 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.382470 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.384644 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.384817 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.384955 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.393568 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.394363 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.395301 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.395741 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.397400 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.397841 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.398360 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkqd6"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.398973 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.403764 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.403827 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.403995 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.404055 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/23cdf53e-881f-4cf2-b557-e087a017b7ec-machine-approver-tls\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.404099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzqxj\" (UniqueName: \"kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405208 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405364 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-client\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405457 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405576 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-audit-dir\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405612 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-auth-proxy-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405655 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsc2k\" (UniqueName: \"kubernetes.io/projected/23cdf53e-881f-4cf2-b557-e087a017b7ec-kube-api-access-nsc2k\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405689 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405760 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405784 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405925 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.405949 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-node-pullsecrets\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406171 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-image-import-ca\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406189 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-serving-cert\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406411 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-encryption-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406470 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzx2n\" (UniqueName: \"kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406640 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-audit\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406681 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406701 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d7m8\" (UniqueName: \"kubernetes.io/projected/cf586c8c-c859-44a2-9b28-16708745cda1-kube-api-access-7d7m8\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406734 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.406780 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-serving-ca\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.407577 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7ztl2"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.407799 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.410258 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.413045 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.432191 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.433071 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6t4bv"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.433383 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.433802 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.434704 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.434961 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.435263 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.435483 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.436029 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.436118 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.436269 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.436302 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.436738 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438289 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438459 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438508 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438568 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438660 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438740 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438817 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.438886 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 03 10:04:39 crc kubenswrapper[5010]: W0203 10:04:39.438960 5010 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.438990 5010 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.439059 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.439931 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.439977 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.439999 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440025 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440099 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440163 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440176 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440205 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440291 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440375 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440451 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440550 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440625 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440679 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440749 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.440808 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441066 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441178 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441193 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441235 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441312 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441363 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441387 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.441567 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.442376 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.443931 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.445248 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.449334 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.450617 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.450963 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.451394 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.454610 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.455957 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.461783 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.462129 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.462151 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.462426 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.462462 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.473499 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.474435 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.475178 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.479554 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.481627 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.481866 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.481898 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.485899 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.489306 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.496694 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jvtp4"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.497276 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.497471 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jvtp4"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.508896 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.509165 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.509298 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.509476 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.509688 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.509699 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510358 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510585 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510614 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d7m8\" (UniqueName: \"kubernetes.io/projected/cf586c8c-c859-44a2-9b28-16708745cda1-kube-api-access-7d7m8\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510643 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510670 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad56317f-8d37-4d59-9abe-346b4340a30c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510694 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510715 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-client\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510738 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510760 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510780 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510799 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-config\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510819 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510850 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510873 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510895 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510915 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510936 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s54b\" (UniqueName: \"kubernetes.io/projected/291724bc-0382-45d5-a089-356f8e04feb5-kube-api-access-8s54b\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510956 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.510980 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-serving-ca\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511001 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-config\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511022 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-serving-cert\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511053 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511074 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511094 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511121 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511143 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511167 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511192 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/23cdf53e-881f-4cf2-b557-e087a017b7ec-machine-approver-tls\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511235 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511259 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-service-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511278 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511489 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.511977 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512049 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-serving-ca\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512278 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512328 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.512350 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.012337716 +0000 UTC m=+150.168313935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512440 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512473 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzqxj\" (UniqueName: \"kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512520 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-client\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512546 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512571 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v69f4\" (UniqueName: \"kubernetes.io/projected/2e96179c-7517-40d5-918f-1fc379e16fec-kube-api-access-v69f4\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512584 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512594 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-service-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.512616 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-config\")
pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513010 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513260 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513414 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513793 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513836 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513863 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291724bc-0382-45d5-a089-356f8e04feb5-serving-cert\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513792 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.513989 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.514207 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-audit-dir\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.514283 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.514612 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.514797 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515073 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515457 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515461 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515710 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515957 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-audit-dir\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.515997 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-auth-proxy-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516053 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc6wt\" (UniqueName: \"kubernetes.io/projected/45194a2a-320c-439d-9070-2c534070b7e4-kube-api-access-dc6wt\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516235 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516663 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23cdf53e-881f-4cf2-b557-e087a017b7ec-auth-proxy-config\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516722 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516732 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfwvg\" (UniqueName: \"kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516762 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516809 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsc2k\" (UniqueName: \"kubernetes.io/projected/23cdf53e-881f-4cf2-b557-e087a017b7ec-kube-api-access-nsc2k\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516858 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516886 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk877\" (UniqueName: \"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-kube-api-access-fk877\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516916 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516941 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45194a2a-320c-439d-9070-2c534070b7e4-metrics-tls\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516964 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.516987 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517013 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517055 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgh4v\" (UniqueName: \"kubernetes.io/projected/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-kube-api-access-dgh4v\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517086 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517133 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517161 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.517185 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.518826 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.519923 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.519960 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.520004 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.520105 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.520107 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.520268 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521755 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-etcd-client\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.520139 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521831 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-node-pullsecrets\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521850 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-serving-cert\") pod 
\"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-encryption-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521893 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzx2n\" (UniqueName: \"kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521915 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-image-import-ca\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521944 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521966 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhnr\" (UniqueName: \"kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521982 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521999 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-audit\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.522020 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 
03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.522033 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.522050 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqkpg\" (UniqueName: \"kubernetes.io/projected/ad56317f-8d37-4d59-9abe-346b4340a30c-kube-api-access-lqkpg\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.522067 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf8k7\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524000 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524113 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ljpd5"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.521912 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524353 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/23cdf53e-881f-4cf2-b557-e087a017b7ec-machine-approver-tls\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524517 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-audit\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524648 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-encryption-config\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524824 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/cf586c8c-c859-44a2-9b28-16708745cda1-image-import-ca\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524848 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x7hq6"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.524919 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.523955 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf586c8c-c859-44a2-9b28-16708745cda1-node-pullsecrets\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.525876 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.526367 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.526848 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf586c8c-c859-44a2-9b28-16708745cda1-serving-cert\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.526931 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.527117 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-whpdl"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.527529 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.527899 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.528256 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.528760 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.529153 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.529652 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.530176 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.530907 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.531391 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.531399 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.532267 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.532338 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.533361 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.533466 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.534160 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.534506 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.534656 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.534857 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9t7q"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.535182 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.536727 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.539699 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.540459 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.540703 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.540877 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.543315 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.544065 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.544593 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.544924 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.545012 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.545799 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.547461 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.549829 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.549870 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.554633 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkqd6"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.564266 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.569278 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jvtp4"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.599279 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.599340 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.601936 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.603166 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.606980 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzqxj\" (UniqueName: \"kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj\") pod \"route-controller-manager-6576b87f9c-qgmq6\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.607504 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-77jcb"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.608322 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.608747 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d7m8\" (UniqueName: \"kubernetes.io/projected/cf586c8c-c859-44a2-9b28-16708745cda1-kube-api-access-7d7m8\") pod \"apiserver-76f77b778f-9lvbs\" (UID: \"cf586c8c-c859-44a2-9b28-16708745cda1\") " pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.609021 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.610165 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.611165 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ljpd5"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.612183 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.613149 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.617045 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7ztl2"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.618510 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.619444 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.623230 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624198 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624454 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624493 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624514 5010 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-apiservice-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624533 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624551 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624570 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624594 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/433ae711-459e-4627-83c1-0fecfe929c60-audit-dir\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624612 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624630 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-srv-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624647 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-config\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624667 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d5tz\" (UniqueName: \"kubernetes.io/projected/d8101cd0-5430-4786-bf8a-3d9c60ad1f7d-kube-api-access-5d5tz\") pod \"downloads-7954f5f757-jvtp4\" (UID: \"d8101cd0-5430-4786-bf8a-3d9c60ad1f7d\") " pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624683 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624698 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624713 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624729 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624744 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-images\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624788 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-service-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624804 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-config\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624820 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b075f5c7-f95f-4883-8d94-d1b64bc3c451-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624835 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdntk\" (UniqueName: \"kubernetes.io/projected/4da6d2c9-755f-44e5-bab0-37cf60ee8378-kube-api-access-gdntk\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624852 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6x9\" (UniqueName: \"kubernetes.io/projected/ba766e4c-056f-4be6-a4b9-05592b641f87-kube-api-access-8c6x9\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624867 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624882 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624896 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624912 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624927 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b075f5c7-f95f-4883-8d94-d1b64bc3c451-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624944 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk877\" (UniqueName: 
\"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-kube-api-access-fk877\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624959 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624976 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.624998 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625016 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgh4v\" (UniqueName: \"kubernetes.io/projected/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-kube-api-access-dgh4v\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625032 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba766e4c-056f-4be6-a4b9-05592b641f87-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625052 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625068 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8lhm\" (UniqueName: \"kubernetes.io/projected/c07afc79-e943-4e79-93ed-8eedd0ade1bc-kube-api-access-q8lhm\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625084 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gskkj\" (UniqueName: \"kubernetes.io/projected/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-kube-api-access-gskkj\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625115 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsz9\" (UniqueName: \"kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58ae0ba7-4454-4bec-87ac-432b346ee643-service-ca-bundle\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625928 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-proxy-tls\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.625952 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-proxy-tls\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.633282 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-service-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.627579 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"] Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.628458 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.128430916 +0000 UTC m=+150.284407045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.633342 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.633362 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.633376 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.632883 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.630801 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.632057 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.632413 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.631315 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.633818 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.634337 5010 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.634497 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9t7q"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.634526 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-serving-cert\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.634620 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-stats-auth\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.634724 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwhnr\" (UniqueName: \"kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635113 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxl5b\" (UniqueName: \"kubernetes.io/projected/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-kube-api-access-nxl5b\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635157 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77bnx\" (UniqueName: \"kubernetes.io/projected/98d0bd22-70a8-4496-9074-3251c15e5b59-kube-api-access-77bnx\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635204 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635255 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635278 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv6sx\" (UniqueName: \"kubernetes.io/projected/9cddf065-d958-4bf4-b5a8-67321cba2f67-kube-api-access-tv6sx\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635326 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-webhook-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635349 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b075f5c7-f95f-4883-8d94-d1b64bc3c451-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635372 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-encryption-config\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635415 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-trusted-ca\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635440 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da6d2c9-755f-44e5-bab0-37cf60ee8378-serving-cert\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635509 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635553 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdssv\" (UniqueName: \"kubernetes.io/projected/58ae0ba7-4454-4bec-87ac-432b346ee643-kube-api-access-pdssv\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " 
pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635581 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635603 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635650 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-audit-policies\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635676 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635720 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmnts\" (UniqueName: \"kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635745 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpgf\" (UniqueName: \"kubernetes.io/projected/9fed3a51-8c05-46a7-8057-6839f70b2f22-kube-api-access-ftpgf\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635795 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-config\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635821 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b693a4b6-8aa6-489e-a797-fa486eab7443-tmpfs\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 
03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635847 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zpjj\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-kube-api-access-9zpjj\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635897 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635921 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635959 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s54b\" (UniqueName: \"kubernetes.io/projected/291724bc-0382-45d5-a089-356f8e04feb5-kube-api-access-8s54b\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.635981 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-serving-cert\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636002 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636040 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-profile-collector-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636063 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-config\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 
crc kubenswrapper[5010]: I0203 10:04:39.636085 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-serving-cert\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636123 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72kh9\" (UniqueName: \"kubernetes.io/projected/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-kube-api-access-72kh9\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636150 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636186 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636207 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlg8\" (UniqueName: \"kubernetes.io/projected/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-kube-api-access-rrlg8\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636255 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636275 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636314 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2n5v\" (UniqueName: \"kubernetes.io/projected/b693a4b6-8aa6-489e-a797-fa486eab7443-kube-api-access-l2n5v\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: 
I0203 10:04:39.636338 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636357 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-service-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636395 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636417 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-config\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636442 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml6zh\" (UniqueName: \"kubernetes.io/projected/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-kube-api-access-ml6zh\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636483 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v69f4\" (UniqueName: \"kubernetes.io/projected/2e96179c-7517-40d5-918f-1fc379e16fec-kube-api-access-v69f4\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636504 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636547 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636575 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636598 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291724bc-0382-45d5-a089-356f8e04feb5-serving-cert\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636641 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97kl8\" (UniqueName: \"kubernetes.io/projected/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-kube-api-access-97kl8\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636669 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqs8s\" (UniqueName: \"kubernetes.io/projected/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-kube-api-access-jqs8s\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636712 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfwvg\" (UniqueName: \"kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636740 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc6wt\" (UniqueName: \"kubernetes.io/projected/45194a2a-320c-439d-9070-2c534070b7e4-kube-api-access-dc6wt\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636786 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-default-certificate\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636813 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636837 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/45194a2a-320c-439d-9070-2c534070b7e4-metrics-tls\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636879 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636904 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636945 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e12e505-3d35-4b3e-8015-9e2341d4791e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636968 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-srv-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.636990 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e12e505-3d35-4b3e-8015-9e2341d4791e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637033 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwxm6\" (UniqueName: \"kubernetes.io/projected/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-kube-api-access-bwxm6\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637064 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637104 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-jcflf\" (UniqueName: \"kubernetes.io/projected/433ae711-459e-4627-83c1-0fecfe929c60-kube-api-access-jcflf\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637131 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-config\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637181 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637207 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637275 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637298 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637354 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637369 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637381 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637419 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9lvbs"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637445 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637499 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.637532 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638043 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638072 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqkpg\" (UniqueName: \"kubernetes.io/projected/ad56317f-8d37-4d59-9abe-346b4340a30c-kube-api-access-lqkpg\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638095 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf8k7\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638113 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-etcd-client\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638130 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7xxg\" (UniqueName: \"kubernetes.io/projected/6e12e505-3d35-4b3e-8015-9e2341d4791e-kube-api-access-j7xxg\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638148 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad56317f-8d37-4d59-9abe-346b4340a30c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638166 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/effb39d8-ef30-45f3-bf93-b9dbb8de2475-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638182 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638200 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638228 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638248 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638264 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-client\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638299 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cde7673b-c4b1-4060-86cd-cac7120de9bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638357 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bh9q\" (UniqueName: \"kubernetes.io/projected/0c3f3f4e-122f-40b8-a3f1-d868a36640a1-kube-api-access-4bh9q\") pod \"migrator-59844c95c7-j4pcf\" (UID: \"0c3f3f4e-122f-40b8-a3f1-d868a36640a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638397 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638421 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c07afc79-e943-4e79-93ed-8eedd0ade1bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638455 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-metrics-certs\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.638533 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cde7673b-c4b1-4060-86cd-cac7120de9bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.639513 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-config\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.640025 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.640183 
5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.640277 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.640420 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-config\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.640645 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.641038 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.641193 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-service-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.641969 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.643310 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.643632 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.643890 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.643902 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291724bc-0382-45d5-a089-356f8e04feb5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.644134 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.644604 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45194a2a-320c-439d-9070-2c534070b7e4-metrics-tls\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.645389 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-config\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.645453 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.645681 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-serving-cert\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.645890 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.645888 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-ca\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc 
kubenswrapper[5010]: I0203 10:04:39.645929 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.646094 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.646096 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.646438 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2e96179c-7517-40d5-918f-1fc379e16fec-etcd-client\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.646530 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.647713 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/291724bc-0382-45d5-a089-356f8e04feb5-serving-cert\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.648252 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.648707 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.649266 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"] Feb 03 10:04:39 crc 
kubenswrapper[5010]: I0203 10:04:39.650201 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.651151 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad56317f-8d37-4d59-9abe-346b4340a30c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.651304 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.651644 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.653125 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.653345 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x7hq6"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.653422 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.654972 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.656866 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f9lhg"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.658365 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.658404 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vxx8p"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.658473 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.658884 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vxx8p" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.659055 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.660924 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.663200 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.663680 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.665228 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.666270 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f9lhg"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.671126 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vxx8p"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.672141 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.672457 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6t4bv"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.675332 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.678621 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-m4jjq"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.679678 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.679805 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m4jjq"] Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.692615 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.697763 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.711571 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.731776 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739679 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739729 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cde7673b-c4b1-4060-86cd-cac7120de9bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739754 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bh9q\" (UniqueName: \"kubernetes.io/projected/0c3f3f4e-122f-40b8-a3f1-d868a36640a1-kube-api-access-4bh9q\") pod \"migrator-59844c95c7-j4pcf\" (UID: \"0c3f3f4e-122f-40b8-a3f1-d868a36640a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739777 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c07afc79-e943-4e79-93ed-8eedd0ade1bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739827 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-metrics-certs\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739850 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cde7673b-c4b1-4060-86cd-cac7120de9bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739866 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.739872 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740291 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-apiservice-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740320 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740351 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740369 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/433ae711-459e-4627-83c1-0fecfe929c60-audit-dir\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740387 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740436 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-srv-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 
10:04:39.740454 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-config\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740477 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740493 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740511 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d5tz\" (UniqueName: \"kubernetes.io/projected/d8101cd0-5430-4786-bf8a-3d9c60ad1f7d-kube-api-access-5d5tz\") pod \"downloads-7954f5f757-jvtp4\" (UID: \"d8101cd0-5430-4786-bf8a-3d9c60ad1f7d\") " pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740534 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-images\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740562 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b075f5c7-f95f-4883-8d94-d1b64bc3c451-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740578 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdntk\" (UniqueName: \"kubernetes.io/projected/4da6d2c9-755f-44e5-bab0-37cf60ee8378-kube-api-access-gdntk\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740601 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6x9\" (UniqueName: \"kubernetes.io/projected/ba766e4c-056f-4be6-a4b9-05592b641f87-kube-api-access-8c6x9\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740634 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740655 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740673 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b075f5c7-f95f-4883-8d94-d1b64bc3c451-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740691 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740736 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba766e4c-056f-4be6-a4b9-05592b641f87-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740764 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8lhm\" (UniqueName: \"kubernetes.io/projected/c07afc79-e943-4e79-93ed-8eedd0ade1bc-kube-api-access-q8lhm\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.740867 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.240853273 +0000 UTC m=+150.396829402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.740892 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/433ae711-459e-4627-83c1-0fecfe929c60-audit-dir\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741030 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741491 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741718 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gskkj\" (UniqueName: \"kubernetes.io/projected/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-kube-api-access-gskkj\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741746 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfsz9\" (UniqueName: \"kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741939 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58ae0ba7-4454-4bec-87ac-432b346ee643-service-ca-bundle\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.742042 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.742133 5010 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-serving-cert\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.742766 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-proxy-tls\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.742945 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-proxy-tls\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743075 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxl5b\" (UniqueName: \"kubernetes.io/projected/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-kube-api-access-nxl5b\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743184 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77bnx\" (UniqueName: \"kubernetes.io/projected/98d0bd22-70a8-4496-9074-3251c15e5b59-kube-api-access-77bnx\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743019 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.742157 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-config\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.741946 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b075f5c7-f95f-4883-8d94-d1b64bc3c451-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743298 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-stats-auth\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743597 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743746 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv6sx\" (UniqueName: \"kubernetes.io/projected/9cddf065-d958-4bf4-b5a8-67321cba2f67-kube-api-access-tv6sx\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743851 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.743958 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-webhook-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.744057 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b075f5c7-f95f-4883-8d94-d1b64bc3c451-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.744160 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-encryption-config\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.744282 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-trusted-ca\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745063 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da6d2c9-755f-44e5-bab0-37cf60ee8378-serving-cert\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " 
pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745192 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-audit-policies\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745288 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745383 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdssv\" (UniqueName: \"kubernetes.io/projected/58ae0ba7-4454-4bec-87ac-432b346ee643-kube-api-access-pdssv\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745454 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745545 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmnts\" (UniqueName: \"kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745617 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpgf\" (UniqueName: \"kubernetes.io/projected/9fed3a51-8c05-46a7-8057-6839f70b2f22-kube-api-access-ftpgf\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745690 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b693a4b6-8aa6-489e-a797-fa486eab7443-tmpfs\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745767 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zpjj\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-kube-api-access-9zpjj\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745851 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-profile-collector-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745926 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-serving-cert\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.745994 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746061 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72kh9\" (UniqueName: \"kubernetes.io/projected/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-kube-api-access-72kh9\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746152 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrlg8\" (UniqueName: \"kubernetes.io/projected/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-kube-api-access-rrlg8\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746246 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746317 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746247 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b693a4b6-8aa6-489e-a797-fa486eab7443-tmpfs\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746463 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2n5v\" (UniqueName: 
\"kubernetes.io/projected/b693a4b6-8aa6-489e-a797-fa486eab7443-kube-api-access-l2n5v\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746536 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-config\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746607 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml6zh\" (UniqueName: \"kubernetes.io/projected/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-kube-api-access-ml6zh\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746713 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746823 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97kl8\" (UniqueName: \"kubernetes.io/projected/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-kube-api-access-97kl8\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.746920 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqs8s\" (UniqueName: \"kubernetes.io/projected/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-kube-api-access-jqs8s\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747241 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747340 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-default-certificate\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747434 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e12e505-3d35-4b3e-8015-9e2341d4791e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: 
\"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747536 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-srv-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747459 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-proxy-tls\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747638 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e12e505-3d35-4b3e-8015-9e2341d4791e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747714 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwxm6\" (UniqueName: \"kubernetes.io/projected/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-kube-api-access-bwxm6\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747743 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747770 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747798 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747815 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcflf\" (UniqueName: \"kubernetes.io/projected/433ae711-459e-4627-83c1-0fecfe929c60-kube-api-access-jcflf\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747836 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-config\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747885 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747950 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-etcd-client\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747969 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7xxg\" (UniqueName: \"kubernetes.io/projected/6e12e505-3d35-4b3e-8015-9e2341d4791e-kube-api-access-j7xxg\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.747992 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.748018 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/effb39d8-ef30-45f3-bf93-b9dbb8de2475-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.748594 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b075f5c7-f95f-4883-8d94-d1b64bc3c451-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.748856 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.749135 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e12e505-3d35-4b3e-8015-9e2341d4791e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.750548 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e12e505-3d35-4b3e-8015-9e2341d4791e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.751724 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.751850 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.773033 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.787584 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cde7673b-c4b1-4060-86cd-cac7120de9bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.797149 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.801142 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cde7673b-c4b1-4060-86cd-cac7120de9bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.811808 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.831704 5010 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.848596 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.848714 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.348692939 +0000 UTC m=+150.504669068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.848966 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.849671 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.349661776 +0000 UTC m=+150.505637965 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.851771 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.861530 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-images\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.871645 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.871800 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9lvbs"]
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.877813 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-proxy-tls\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"
Feb 03 10:04:39 crc kubenswrapper[5010]: W0203 10:04:39.879541 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf586c8c_c859_44a2_9b28_16708745cda1.slice/crio-dc0e215632636070f9233c8da5cf61ed4ccec496761b77a3b527af638caff757 WatchSource:0}: Error finding container dc0e215632636070f9233c8da5cf61ed4ccec496761b77a3b527af638caff757: Status 404 returned error can't find the container with id dc0e215632636070f9233c8da5cf61ed4ccec496761b77a3b527af638caff757
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.893896 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.894427 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"]
Feb 03 10:04:39 crc kubenswrapper[5010]: W0203 10:04:39.904436 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61153282_2bd6_4bbf_a04a_76909b13f961.slice/crio-de6014a42b56ede90300ddd6921cb59d6826d8880dbadae1fda87913014c2ca8 WatchSource:0}: Error finding container de6014a42b56ede90300ddd6921cb59d6826d8880dbadae1fda87913014c2ca8: Status 404 returned error can't find the container with id de6014a42b56ede90300ddd6921cb59d6826d8880dbadae1fda87913014c2ca8
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.928609 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsc2k\" (UniqueName: \"kubernetes.io/projected/23cdf53e-881f-4cf2-b557-e087a017b7ec-kube-api-access-nsc2k\") pod \"machine-approver-56656f9798-sk5mk\" (UID: \"23cdf53e-881f-4cf2-b557-e087a017b7ec\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.947744 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzx2n\" (UniqueName: \"kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n\") pod \"controller-manager-879f6c89f-lc7dd\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.950121 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.950287 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.450264856 +0000 UTC m=+150.606240985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.950379 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:39 crc kubenswrapper[5010]: E0203 10:04:39.950897 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.450885594 +0000 UTC m=+150.606861723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.951858 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.964635 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba766e4c-056f-4be6-a4b9-05592b641f87-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.971400 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.985286 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.992257 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 03 10:04:39 crc kubenswrapper[5010]: I0203 10:04:39.998908 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-config\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.010193 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk"
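The MountDevice and TearDownAt failures that recur throughout this section all reduce to one condition: the kubelet's in-memory registry of CSI plugins does not yet contain kubevirt.io.hostpath-provisioner, because the driver's node plugin has not registered with the kubelet at this point in startup. A minimal Go sketch of that lookup pattern (the type and method names here are illustrative, not the kubelet's actual ones):

```go
package main

import (
	"fmt"
	"sync"
)

// csiDriverStore loosely mimics the kubelet's in-memory map of registered
// CSI plugins, keyed by driver name. An entry appears only after the
// driver's node plugin registers through the kubelet plugin-registration
// mechanism; until then, every volume operation for that driver fails.
type csiDriverStore struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

func (s *csiDriverStore) client(name string) error {
	s.mu.RLock()
	defer s.mu.RUnlock()
	if _, ok := s.drivers[name]; !ok {
		// The condition behind the MountDevice/TearDownAt failures above:
		// the operation ran before the driver registered.
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	s := &csiDriverStore{drivers: map[string]struct{}{}}
	fmt.Println(s.client("kubevirt.io.hostpath-provisioner")) // fails: not registered yet

	s.mu.Lock()
	s.drivers["kubevirt.io.hostpath-provisioner"] = struct{}{} // registration event
	s.mu.Unlock()

	fmt.Println(s.client("kubevirt.io.hostpath-provisioner")) // <nil>: a later retry succeeds
}
```

Once the driver pod comes up and registers, the identical mount operation succeeds on a later retry, which is why these errors are interleaved with otherwise normal startup traffic instead of being fatal.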
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.022733 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.025426 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4da6d2c9-755f-44e5-bab0-37cf60ee8378-trusted-ca\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.032205 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.051774 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.052510 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.052598 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.552581705 +0000 UTC m=+150.708557834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.062135 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4da6d2c9-755f-44e5-bab0-37cf60ee8378-serving-cert\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.072514 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.092682 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.112868 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.132583 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.144280 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c07afc79-e943-4e79-93ed-8eedd0ade1bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.153478 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.153816 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.653798843 +0000 UTC m=+150.809774972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.154026 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"]
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.154110 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.171472 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.179626 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.192078 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: W0203 10:04:40.200077 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode27ae235_3c1c_4ee0_85b6_a53477e335e5.slice/crio-8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81 WatchSource:0}: Error finding container 8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81: Status 404 returned error can't find the container with id 8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.212370 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.231584 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.252899 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.254882 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.255299 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.755274448 +0000 UTC m=+150.911250577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.272889 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.280906 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-default-certificate\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.292429 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.297311 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-stats-auth\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.312158 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.319166 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" event={"ID":"61153282-2bd6-4bbf-a04a-76909b13f961","Type":"ContainerStarted","Data":"815c9a092d4240f3fb7d7c856a7d1fe04289a8f354f5c335fb93d5de0abf1f2c"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.319229 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" event={"ID":"61153282-2bd6-4bbf-a04a-76909b13f961","Type":"ContainerStarted","Data":"de6014a42b56ede90300ddd6921cb59d6826d8880dbadae1fda87913014c2ca8"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.319377 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.320469 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" event={"ID":"23cdf53e-881f-4cf2-b557-e087a017b7ec","Type":"ContainerStarted","Data":"de63740c8bff7cdcb85cb9e685ecdbe9ab444131ef57e443aaa8fea303a4459d"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.320513 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" event={"ID":"23cdf53e-881f-4cf2-b557-e087a017b7ec","Type":"ContainerStarted","Data":"e73bad45656b96d3815aa3ce12b06891b4a27b4089969094ff27b1f088236ebd"}
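Each nestedpendingoperations.go:348 entry above ends with "No retries permitted until <timestamp> (durationBeforeRetry 500ms)": the failed volume operation is parked and may not be retried before that deadline. A sketch of that pacing, assuming an exponential-backoff shape; the 500ms initial delay is taken from the log, while the doubling factor and the cap are assumptions:

```go
package main

import (
	"fmt"
	"time"
)

// Exponential backoff sketch for parked volume operations. After each
// failure the operation records a "not before" deadline; the delay grows
// geometrically up to a maximum, so a driver that registers late is
// retried quickly at first and then less aggressively.
func main() {
	delay := 500 * time.Millisecond          // matches durationBeforeRetry in the log
	maxDelay := 2*time.Minute + 2*time.Second // assumed cap, not from this log
	now := time.Now()
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted until %s\n",
			attempt, now.Add(delay).Format("15:04:05.000"))
		now = now.Add(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```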
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.321114 5010 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qgmq6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.321180 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.323510 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" event={"ID":"cf586c8c-c859-44a2-9b28-16708745cda1","Type":"ContainerDied","Data":"d8e170ae0df330deb0c6596bc5973cb373d32b7634e54c39e7cb19723d18b5aa"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.323854 5010 generic.go:334] "Generic (PLEG): container finished" podID="cf586c8c-c859-44a2-9b28-16708745cda1" containerID="d8e170ae0df330deb0c6596bc5973cb373d32b7634e54c39e7cb19723d18b5aa" exitCode=0
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.323943 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" event={"ID":"cf586c8c-c859-44a2-9b28-16708745cda1","Type":"ContainerStarted","Data":"dc0e215632636070f9233c8da5cf61ed4ccec496761b77a3b527af638caff757"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.324543 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58ae0ba7-4454-4bec-87ac-432b346ee643-metrics-certs\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.325666 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" event={"ID":"e27ae235-3c1c-4ee0-85b6-a53477e335e5","Type":"ContainerStarted","Data":"9193e654b0aae87a0f6cb66b87865bff8d5a0d8845927c6e2ff446174e9141b4"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.325808 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" event={"ID":"e27ae235-3c1c-4ee0-85b6-a53477e335e5","Type":"ContainerStarted","Data":"8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81"}
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.325910 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.328060 5010 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lc7dd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.328102 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.331230 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.333436 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58ae0ba7-4454-4bec-87ac-432b346ee643-service-ca-bundle\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.351772 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.356673 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.357797 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.857785503 +0000 UTC m=+151.013761632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.372254 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.387337 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-srv-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.392766 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.401764 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"
\"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-profile-collector-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.405791 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.411979 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.416786 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-webhook-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.423654 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b693a4b6-8aa6-489e-a797-fa486eab7443-apiservice-cert\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.432124 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.451561 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.457773 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.457885 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.957863017 +0000 UTC m=+151.113839146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.457992 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.458269 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:40.958261588 +0000 UTC m=+151.114237717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.472420 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.479728 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-serving-cert\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.492748 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.499485 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-config\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.511705 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.532548 5010 request.go:700] Waited for 1.000764512s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.535396 5010 reflector.go:368] Caches populated for 
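The request.go:700 entry above is client-go's client-side rate limiter at work: the GET was held back roughly one second by the client's own token bucket, not by API-server priority and fairness. A sketch of the mechanism using golang.org/x/time/rate (the 5 QPS / burst-of-10 figures are illustrative client-go-style defaults, not values read from this log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate" // go get golang.org/x/time/rate
)

// A token-bucket limiter in front of outgoing API requests: the first
// burst goes through immediately, after which each request must wait for
// a token, producing exactly the "Waited for ... due to client-side
// throttling" pattern seen in the log.
func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	for i := 0; i < 15; i++ {
		start := time.Now()
		_ = limiter.Wait(context.Background())
		if waited := time.Since(start); waited > 100*time.Millisecond {
			fmt.Printf("request %d: waited %v due to client-side throttling\n", i, waited)
		}
	}
}
```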
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.535396 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.541345 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9cddf065-d958-4bf4-b5a8-67321cba2f67-srv-cert\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.553702 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.558948 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.559120 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.059096345 +0000 UTC m=+151.215072484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.559571 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.559952 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.059941569 +0000 UTC m=+151.215917698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.571232 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.591912 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.611581 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.621582 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-etcd-client\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.631581 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.640621 5010 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.640723 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images podName:dc73dc6e-53ff-48b8-932e-d5aeb839f2dd nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.140700165 +0000 UTC m=+151.296676294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images") pod "machine-api-operator-5694c8668f-5mq4r" (UID: "dc73dc6e-53ff-48b8-932e-d5aeb839f2dd") : failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.641104 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.651980 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.655416 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-serving-cert\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.660929 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.662479 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.161835196 +0000 UTC m=+151.317811315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.672313 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.676761 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/433ae711-459e-4627-83c1-0fecfe929c60-encryption-config\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.692169 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.711846 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.731829 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.735953 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-audit-policies\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741000 5010 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741052 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config podName:effb39d8-ef30-45f3-bf93-b9dbb8de2475 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.241038158 +0000 UTC m=+151.397014287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config") pod "kube-controller-manager-operator-78b949d7b-2nxxl" (UID: "effb39d8-ef30-45f3-bf93-b9dbb8de2475") : failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741057 5010 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741123 5010 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741199 5010 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741139 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics podName:1b5592be-8839-4660-a4c4-ab662fc975eb nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.24111936 +0000 UTC m=+151.397095489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics") pod "marketplace-operator-79b997595-6kg4f" (UID: "1b5592be-8839-4660-a4c4-ab662fc975eb") : failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741304 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle podName:d882e1bb-7ece-45ea-9e5e-0d23f162f06e nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.241281505 +0000 UTC m=+151.397257714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle") pod "service-ca-9c57cc56f-c9t7q" (UID: "d882e1bb-7ece-45ea-9e5e-0d23f162f06e") : failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.741326 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs podName:9fed3a51-8c05-46a7-8057-6839f70b2f22 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.241315656 +0000 UTC m=+151.397291915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs") pod "machine-config-server-77jcb" (UID: "9fed3a51-8c05-46a7-8057-6839f70b2f22") : failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.744020 5010 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.744086 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca podName:1b5592be-8839-4660-a4c4-ab662fc975eb nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.244070214 +0000 UTC m=+151.400046423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca") pod "marketplace-operator-79b997595-6kg4f" (UID: "1b5592be-8839-4660-a4c4-ab662fc975eb") : failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.745682 5010 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.745772 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume podName:9b9c4aab-790c-4581-bfc2-ad1d7302c704 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.245752192 +0000 UTC m=+151.401728321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume") pod "collect-profiles-29501880-x6pjp" (UID: "9b9c4aab-790c-4581-bfc2-ad1d7302c704") : failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746783 5010 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746798 5010 secret.go:188] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746866 5010 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746848 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert podName:51fcb019-af4d-4f3d-b1b0-4b4e6761db7c nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.246836043 +0000 UTC m=+151.402812252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert") pod "openshift-config-operator-7777fb866f-cp6s5" (UID: "51fcb019-af4d-4f3d-b1b0-4b4e6761db7c") : failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746929 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config podName:98d0bd22-70a8-4496-9074-3251c15e5b59 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.246902885 +0000 UTC m=+151.402879104 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config") pod "openshift-controller-manager-operator-756b6f6bc6-m76db" (UID: "98d0bd22-70a8-4496-9074-3251c15e5b59") : failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.746945 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert podName:effb39d8-ef30-45f3-bf93-b9dbb8de2475 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.246939716 +0000 UTC m=+151.402915845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert") pod "kube-controller-manager-operator-78b949d7b-2nxxl" (UID: "effb39d8-ef30-45f3-bf93-b9dbb8de2475") : failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.747456 5010 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.747514 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key podName:d882e1bb-7ece-45ea-9e5e-0d23f162f06e nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.247494742 +0000 UTC m=+151.403470941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key") pod "service-ca-9c57cc56f-c9t7q" (UID: "d882e1bb-7ece-45ea-9e5e-0d23f162f06e") : failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.750112 5010 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.748640 5010 secret.go:188] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.750191 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca podName:433ae711-459e-4627-83c1-0fecfe929c60 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.250165577 +0000 UTC m=+151.406141706 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca") pod "apiserver-7bbb656c7d-snrzp" (UID: "433ae711-459e-4627-83c1-0fecfe929c60") : failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.750229 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert podName:98d0bd22-70a8-4496-9074-3251c15e5b59 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.250202058 +0000 UTC m=+151.406178187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert") pod "openshift-controller-manager-operator-756b6f6bc6-m76db" (UID: "98d0bd22-70a8-4496-9074-3251c15e5b59") : failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.750350 5010 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.750400 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token podName:9fed3a51-8c05-46a7-8057-6839f70b2f22 nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.250382654 +0000 UTC m=+151.406358783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token") pod "machine-config-server-77jcb" (UID: "9fed3a51-8c05-46a7-8057-6839f70b2f22") : failed to sync secret cache: timed out waiting for the condition
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.751688 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.762922 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.763564 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.263548318 +0000 UTC m=+151.419524447 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
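The burst of "Couldn't get configMap/secret ...: failed to sync configmap cache: timed out waiting for the condition" errors above all has the same shape: a configMap or secret volume cannot be set up until the kubelet's watch-backed cache for that namespace has synced, the bounded wait expires first, and the operation is parked for the usual 500ms, after which the later "Caches populated" entries let the retries succeed. A minimal sketch of that wait-with-timeout pattern, using a plain channel to stand in for an informer's has-synced signal:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForCacheSync blocks until the cache reports it has synced or the
// timeout elapses, mirroring the bounded wait behind the log's
// "timed out waiting for the condition".
func waitForCacheSync(synced <-chan struct{}, timeout time.Duration) error {
	select {
	case <-synced:
		return nil
	case <-time.After(timeout):
		return errors.New("timed out waiting for the condition")
	}
}

func main() {
	synced := make(chan struct{})
	// Simulate a cache that syncs only after the deadline has passed.
	go func() { time.Sleep(200 * time.Millisecond); close(synced) }()
	if err := waitForCacheSync(synced, 100*time.Millisecond); err != nil {
		fmt.Println("failed to sync configmap cache:", err) // the mount is retried ~500ms later
	}
}
```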
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.772203 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.791848 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.812188 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.831974 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.852950 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.864277 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.864489 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.364458027 +0000 UTC m=+151.520434156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.864972 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.865472 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.365462426 +0000 UTC m=+151.521438555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.872592 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.892009 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.912314 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.931917 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.951781 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.966169 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:40 crc kubenswrapper[5010]: E0203 10:04:40.966832 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.466814077 +0000 UTC m=+151.622790206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.971467 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 03 10:04:40 crc kubenswrapper[5010]: I0203 10:04:40.993361 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.011836 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.032299 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.052540 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.067666 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.068324 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.568310263 +0000 UTC m=+151.724286392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.072295 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.091722 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.112136 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.132242 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.152104 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.169587 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.169864 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"
Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.170492 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.670474457 +0000 UTC m=+151.826450586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.172138 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.192069 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.214616 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.231639 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.264437 5010 csr.go:261] certificate signing request csr-55hvk is approved, waiting to be issued
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.264609 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.267798 5010 csr.go:257] certificate signing request csr-55hvk is issued
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.271949 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272040 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272066 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272120 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"
Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272173 5010 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272207 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272251 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272277 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272384 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272409 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272447 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272504 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272538 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle\") pod 
\"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272689 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.274555 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.272411 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.274646 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/433ae711-459e-4627-83c1-0fecfe929c60-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.275346 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effb39d8-ef30-45f3-bf93-b9dbb8de2475-config\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.275602 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.775589706 +0000 UTC m=+151.931565835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.275729 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-cabundle\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.276606 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.276838 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.277614 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-signing-key\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.278557 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-serving-cert\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.279897 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98d0bd22-70a8-4496-9074-3251c15e5b59-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.281745 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/effb39d8-ef30-45f3-bf93-b9dbb8de2475-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.283311 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/98d0bd22-70a8-4496-9074-3251c15e5b59-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.312765 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.333265 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" event={"ID":"cf586c8c-c859-44a2-9b28-16708745cda1","Type":"ContainerStarted","Data":"0c60082eb619569985a7b2e18cf2135863bc46259049f7f4275c8afcc02527da"} Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.333345 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" event={"ID":"cf586c8c-c859-44a2-9b28-16708745cda1","Type":"ContainerStarted","Data":"cb8e9772c3be3366496706d93d1c3728a070d0862f81a47c07e5217ceaa40dc2"} Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.335098 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" event={"ID":"23cdf53e-881f-4cf2-b557-e087a017b7ec","Type":"ContainerStarted","Data":"dc4a6ea017a4a42cc8306e1e9e833360ad98ccb50390758b6349fe4e14a23f36"} Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.335980 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.340207 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.340375 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.348629 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-node-bootstrap-token\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.352513 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.359471 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9fed3a51-8c05-46a7-8057-6839f70b2f22-certs\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.374198 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.374430 5010 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.874378145 +0000 UTC m=+152.030354284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.374894 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.375264 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.875243519 +0000 UTC m=+152.031219648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.396306 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk877\" (UniqueName: \"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-kube-api-access-fk877\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.411412 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.426148 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgh4v\" (UniqueName: \"kubernetes.io/projected/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-kube-api-access-dgh4v\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.450107 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwhnr\" (UniqueName: 
\"kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr\") pod \"oauth-openshift-558db77b4-rkqd6\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.476660 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.476789 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.976769066 +0000 UTC m=+152.132745195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.476963 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.477309 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:41.977299011 +0000 UTC m=+152.133275140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.477860 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f59fb23-ca1e-487d-a345-9eada8d1c7a8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bd2tr\" (UID: \"8f59fb23-ca1e-487d-a345-9eada8d1c7a8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.492026 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v69f4\" (UniqueName: \"kubernetes.io/projected/2e96179c-7517-40d5-918f-1fc379e16fec-kube-api-access-v69f4\") pod \"etcd-operator-b45778765-6t4bv\" (UID: \"2e96179c-7517-40d5-918f-1fc379e16fec\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.511702 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfwvg\" (UniqueName: \"kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg\") pod \"console-f9d7485db-wtcpj\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.527431 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s54b\" (UniqueName: \"kubernetes.io/projected/291724bc-0382-45d5-a089-356f8e04feb5-kube-api-access-8s54b\") pod \"authentication-operator-69f744f599-bkdmn\" (UID: \"291724bc-0382-45d5-a089-356f8e04feb5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.550393 5010 request.go:700] Waited for 1.905945057s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.552714 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc6wt\" (UniqueName: \"kubernetes.io/projected/45194a2a-320c-439d-9070-2c534070b7e4-kube-api-access-dc6wt\") pod \"dns-operator-744455d44c-7ztl2\" (UID: \"45194a2a-320c-439d-9070-2c534070b7e4\") " pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.573161 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqkpg\" (UniqueName: \"kubernetes.io/projected/ad56317f-8d37-4d59-9abe-346b4340a30c-kube-api-access-lqkpg\") pod \"cluster-samples-operator-665b6dd947-8qfbt\" (UID: \"ad56317f-8d37-4d59-9abe-346b4340a30c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.578587 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.578767 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.078717874 +0000 UTC m=+152.234694013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.579089 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.579159 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.579565 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.079553458 +0000 UTC m=+152.235529657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.590470 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf8k7\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.597977 5010 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.601640 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.611817 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.634330 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.652421 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.657319 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.673057 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.680584 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.680816 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.180784706 +0000 UTC m=+152.336760835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.681229 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.681819 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.181803305 +0000 UTC m=+152.337779434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.691595 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.702087 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.713058 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.727878 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.745496 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.748347 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.756225 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.759369 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.780695 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.782456 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.782919 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.282905819 +0000 UTC m=+152.438881948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.848436 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.853065 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdntk\" (UniqueName: \"kubernetes.io/projected/4da6d2c9-755f-44e5-bab0-37cf60ee8378-kube-api-access-gdntk\") pod \"console-operator-58897d9998-ljpd5\" (UID: \"4da6d2c9-755f-44e5-bab0-37cf60ee8378\") " pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.884036 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d5tz\" (UniqueName: \"kubernetes.io/projected/d8101cd0-5430-4786-bf8a-3d9c60ad1f7d-kube-api-access-5d5tz\") pod \"downloads-7954f5f757-jvtp4\" (UID: \"d8101cd0-5430-4786-bf8a-3d9c60ad1f7d\") " pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.884716 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.885185 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.385170297 +0000 UTC m=+152.541146426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.899757 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b075f5c7-f95f-4883-8d94-d1b64bc3c451-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vxlln\" (UID: \"b075f5c7-f95f-4883-8d94-d1b64bc3c451\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.904929 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"] Feb 03 10:04:41 crc kubenswrapper[5010]: W0203 10:04:41.923680 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61f7221f_b9e1_45bc_8a9e_2f512c9e457d.slice/crio-e28ff007b543d7700a90a71c76b34e3da1bf25749689935b2de9d5cc48606a37 WatchSource:0}: Error finding container e28ff007b543d7700a90a71c76b34e3da1bf25749689935b2de9d5cc48606a37: Status 404 returned error can't find the container with id e28ff007b543d7700a90a71c76b34e3da1bf25749689935b2de9d5cc48606a37 Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.931741 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8lhm\" (UniqueName: \"kubernetes.io/projected/c07afc79-e943-4e79-93ed-8eedd0ade1bc-kube-api-access-q8lhm\") pod \"multus-admission-controller-857f4d67dd-x7hq6\" (UID: \"c07afc79-e943-4e79-93ed-8eedd0ade1bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.954378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfsz9\" (UniqueName: \"kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9\") pod \"collect-profiles-29501880-x6pjp\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.954378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6x9\" (UniqueName: \"kubernetes.io/projected/ba766e4c-056f-4be6-a4b9-05592b641f87-kube-api-access-8c6x9\") pod \"control-plane-machine-set-operator-78cbb6b69f-xcpwg\" (UID: \"ba766e4c-056f-4be6-a4b9-05592b641f87\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.964635 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.965979 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt"] Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.978548 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gskkj\" (UniqueName: \"kubernetes.io/projected/2f2ac3f6-ed20-4205-9dfd-ce6d76269c26-kube-api-access-gskkj\") pod \"machine-config-controller-84d6567774-bh4wr\" (UID: \"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.990067 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:41 crc kubenswrapper[5010]: E0203 10:04:41.990493 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.490478311 +0000 UTC m=+152.646454440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:41 crc kubenswrapper[5010]: I0203 10:04:41.993852 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxl5b\" (UniqueName: \"kubernetes.io/projected/d882e1bb-7ece-45ea-9e5e-0d23f162f06e-kube-api-access-nxl5b\") pod \"service-ca-9c57cc56f-c9t7q\" (UID: \"d882e1bb-7ece-45ea-9e5e-0d23f162f06e\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.038924 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv6sx\" (UniqueName: \"kubernetes.io/projected/9cddf065-d958-4bf4-b5a8-67321cba2f67-kube-api-access-tv6sx\") pod \"catalog-operator-68c6474976-65mrf\" (UID: \"9cddf065-d958-4bf4-b5a8-67321cba2f67\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.054736 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77bnx\" (UniqueName: \"kubernetes.io/projected/98d0bd22-70a8-4496-9074-3251c15e5b59-kube-api-access-77bnx\") pod \"openshift-controller-manager-operator-756b6f6bc6-m76db\" (UID: \"98d0bd22-70a8-4496-9074-3251c15e5b59\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.056609 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdssv\" (UniqueName: 
\"kubernetes.io/projected/58ae0ba7-4454-4bec-87ac-432b346ee643-kube-api-access-pdssv\") pod \"router-default-5444994796-whpdl\" (UID: \"58ae0ba7-4454-4bec-87ac-432b346ee643\") " pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.082477 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.089730 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.093748 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.094146 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.594133908 +0000 UTC m=+152.750110037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.095054 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmnts\" (UniqueName: \"kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts\") pod \"marketplace-operator-79b997595-6kg4f\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.100071 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkqd6"] Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.103464 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpgf\" (UniqueName: \"kubernetes.io/projected/9fed3a51-8c05-46a7-8057-6839f70b2f22-kube-api-access-ftpgf\") pod \"machine-config-server-77jcb\" (UID: \"9fed3a51-8c05-46a7-8057-6839f70b2f22\") " pod="openshift-machine-config-operator/machine-config-server-77jcb" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.103758 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.122302 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zpjj\" (UniqueName: \"kubernetes.io/projected/cde7673b-c4b1-4060-86cd-cac7120de9bf-kube-api-access-9zpjj\") pod \"ingress-operator-5b745b69d9-b78vw\" (UID: \"cde7673b-c4b1-4060-86cd-cac7120de9bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.126418 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.139981 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.144017 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72kh9\" (UniqueName: \"kubernetes.io/projected/ec11c4de-b7ae-4b50-ab95-20be670ab6e8-kube-api-access-72kh9\") pod \"openshift-apiserver-operator-796bbdcf4f-fs75k\" (UID: \"ec11c4de-b7ae-4b50-ab95-20be670ab6e8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.149816 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.153899 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bh9q\" (UniqueName: \"kubernetes.io/projected/0c3f3f4e-122f-40b8-a3f1-d868a36640a1-kube-api-access-4bh9q\") pod \"migrator-59844c95c7-j4pcf\" (UID: \"0c3f3f4e-122f-40b8-a3f1-d868a36640a1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.156183 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6t4bv"] Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.168349 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.171381 5010 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.171449 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images podName:dc73dc6e-53ff-48b8-932e-d5aeb839f2dd nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.171429116 +0000 UTC m=+153.327405245 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images") pod "machine-api-operator-5694c8668f-5mq4r" (UID: "dc73dc6e-53ff-48b8-932e-d5aeb839f2dd") : failed to sync configmap cache: timed out waiting for the condition
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.181734 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7ztl2"]
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.197704 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrlg8\" (UniqueName: \"kubernetes.io/projected/e9dc4ca7-8fe2-4479-989b-0cc98c651c96-kube-api-access-rrlg8\") pod \"service-ca-operator-777779d784-hwrkh\" (UID: \"e9dc4ca7-8fe2-4479-989b-0cc98c651c96\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"
Feb 03 10:04:42 crc kubenswrapper[5010]: W0203 10:04:42.197817 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a475011_4dc0_4490_829a_8016f3b0e8a2.slice/crio-f8f57db6b0062ed4b61ecab8e52afe31f6118dd660c843052c1d2ff893b91694 WatchSource:0}: Error finding container f8f57db6b0062ed4b61ecab8e52afe31f6118dd660c843052c1d2ff893b91694: Status 404 returned error can't find the container with id f8f57db6b0062ed4b61ecab8e52afe31f6118dd660c843052c1d2ff893b91694
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.198280 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.198575 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.698559037 +0000 UTC m=+152.854535166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.198670 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-whpdl"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.210898 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2n5v\" (UniqueName: \"kubernetes.io/projected/b693a4b6-8aa6-489e-a797-fa486eab7443-kube-api-access-l2n5v\") pod \"packageserver-d55dfcdfc-5v56r\" (UID: \"b693a4b6-8aa6-489e-a797-fa486eab7443\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.214739 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.223901 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.225768 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bkdmn"]
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.233881 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.234606 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml6zh\" (UniqueName: \"kubernetes.io/projected/51fcb019-af4d-4f3d-b1b0-4b4e6761db7c-kube-api-access-ml6zh\") pod \"openshift-config-operator-7777fb866f-cp6s5\" (UID: \"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.252274 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqs8s\" (UniqueName: \"kubernetes.io/projected/1b8cbffa-cf1a-4658-bd1b-7e7323449bf3-kube-api-access-jqs8s\") pod \"machine-config-operator-74547568cd-zwvcg\" (UID: \"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.252750 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.254471 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97kl8\" (UniqueName: \"kubernetes.io/projected/df4fd08a-dcc8-4d5c-95ad-9a3542df3233-kube-api-access-97kl8\") pod \"olm-operator-6b444d44fb-sgfk5\" (UID: \"df4fd08a-dcc8-4d5c-95ad-9a3542df3233\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.258124 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr"]
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.274052 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-03 09:59:41 +0000 UTC, rotation deadline is 2026-11-08 08:15:03.034348865 +0000 UTC
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.274092 5010 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6670h10m20.760260291s for next certificate rotation
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.274096 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.281936 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.293198 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwxm6\" (UniqueName: \"kubernetes.io/projected/4ddcb32c-fe4a-4f24-bc77-d6bc56562d75-kube-api-access-bwxm6\") pod \"package-server-manager-789f6589d5-pnt99\" (UID: \"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.298081 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcflf\" (UniqueName: \"kubernetes.io/projected/433ae711-459e-4627-83c1-0fecfe929c60-kube-api-access-jcflf\") pod \"apiserver-7bbb656c7d-snrzp\" (UID: \"433ae711-459e-4627-83c1-0fecfe929c60\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.301072 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.301463 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.801450082 +0000 UTC m=+152.957426221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.302816 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.311327 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.318612 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-77jcb"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.333122 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7xxg\" (UniqueName: \"kubernetes.io/projected/6e12e505-3d35-4b3e-8015-9e2341d4791e-kube-api-access-j7xxg\") pod \"kube-storage-version-migrator-operator-b67b599dd-68xdt\" (UID: \"6e12e505-3d35-4b3e-8015-9e2341d4791e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.337038 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zhrgt\" (UID: \"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.359515 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.365520 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/effb39d8-ef30-45f3-bf93-b9dbb8de2475-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-2nxxl\" (UID: \"effb39d8-ef30-45f3-bf93-b9dbb8de2475\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.386508 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wtcpj" event={"ID":"61f7221f-b9e1-45bc-8a9e-2f512c9e457d","Type":"ContainerStarted","Data":"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa"}
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.386558 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wtcpj" event={"ID":"61f7221f-b9e1-45bc-8a9e-2f512c9e457d","Type":"ContainerStarted","Data":"e28ff007b543d7700a90a71c76b34e3da1bf25749689935b2de9d5cc48606a37"}
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.388300 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" event={"ID":"45194a2a-320c-439d-9070-2c534070b7e4","Type":"ContainerStarted","Data":"7c633523ca54953ccddd00a9ec430ee25964e92694e716a35026049bf91cbdb7"}
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.388981 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" event={"ID":"5a475011-4dc0-4490-829a-8016f3b0e8a2","Type":"ContainerStarted","Data":"f8f57db6b0062ed4b61ecab8e52afe31f6118dd660c843052c1d2ff893b91694"}
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.395054 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" event={"ID":"2e96179c-7517-40d5-918f-1fc379e16fec","Type":"ContainerStarted","Data":"1b7d2cfbbe1ad8dcf31cb2fe132275f407edd85657f502e7daf7eb1bd7ce0447"}
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.395329 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.401970 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402196 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5af6be2-06e9-4fbc-a138-ada090853bc7-cert\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402239 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-socket-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402278 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-plugins-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402357 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nb7d\" (UniqueName: \"kubernetes.io/projected/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-kube-api-access-2nb7d\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402498 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-mountpoint-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402552 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng6cr\" (UniqueName: \"kubernetes.io/projected/a3d78816-3c67-4a17-8951-b605e971aa3b-kube-api-access-ng6cr\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402644 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-registration-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402663 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr725\" (UniqueName: \"kubernetes.io/projected/d5af6be2-06e9-4fbc-a138-ada090853bc7-kube-api-access-cr725\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402677 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-csi-data-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402702 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3d78816-3c67-4a17-8951-b605e971aa3b-metrics-tls\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.402717 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d78816-3c67-4a17-8951-b605e971aa3b-config-volume\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq"
Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.403058 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:42.90303712 +0000 UTC m=+153.059013249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.427056 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" event={"ID":"ad56317f-8d37-4d59-9abe-346b4340a30c","Type":"ContainerStarted","Data":"b8bd4f5410b30f93f712b765a574503f90e387b8be3bfc0b76454a7e6cf020f2"}
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.427745 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.428055 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.455706 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.483694 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.496415 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"]
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.509981 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514043 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-mountpoint-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514137 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng6cr\" (UniqueName: \"kubernetes.io/projected/a3d78816-3c67-4a17-8951-b605e971aa3b-kube-api-access-ng6cr\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514204 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514399 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-registration-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514426 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr725\" (UniqueName: \"kubernetes.io/projected/d5af6be2-06e9-4fbc-a138-ada090853bc7-kube-api-access-cr725\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514440 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-csi-data-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3d78816-3c67-4a17-8951-b605e971aa3b-metrics-tls\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.514532 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d78816-3c67-4a17-8951-b605e971aa3b-config-volume\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.522431 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5af6be2-06e9-4fbc-a138-ada090853bc7-cert\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.522684 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-socket-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.523038 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-plugins-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.536741 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nb7d\" (UniqueName: \"kubernetes.io/projected/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-kube-api-access-2nb7d\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.537686 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-mountpoint-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.548987 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.048968299 +0000 UTC m=+153.204944428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.563936 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-registration-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.585694 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-socket-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.586206 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-csi-data-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.587193 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-plugins-dir\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.587414 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.589002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d78816-3c67-4a17-8951-b605e971aa3b-config-volume\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.590649 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d5af6be2-06e9-4fbc-a138-ada090853bc7-cert\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.591923 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.595566 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nb7d\" (UniqueName: \"kubernetes.io/projected/b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899-kube-api-access-2nb7d\") pod \"csi-hostpathplugin-f9lhg\" (UID: \"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899\") " pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.605415 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3d78816-3c67-4a17-8951-b605e971aa3b-metrics-tls\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.607047 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr725\" (UniqueName: \"kubernetes.io/projected/d5af6be2-06e9-4fbc-a138-ada090853bc7-kube-api-access-cr725\") pod \"ingress-canary-vxx8p\" (UID: \"d5af6be2-06e9-4fbc-a138-ada090853bc7\") " pod="openshift-ingress-canary/ingress-canary-vxx8p"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.611880 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng6cr\" (UniqueName: \"kubernetes.io/projected/a3d78816-3c67-4a17-8951-b605e971aa3b-kube-api-access-ng6cr\") pod \"dns-default-m4jjq\" (UID: \"a3d78816-3c67-4a17-8951-b605e971aa3b\") " pod="openshift-dns/dns-default-m4jjq"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.637952 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.638237 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.138206677 +0000 UTC m=+153.294182806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.648481 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln"]
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.648704 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.732013 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vxx8p"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.739509 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.739873 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.239860457 +0000 UTC m=+153.395836586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.777594 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m4jjq"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.799327 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg"]
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.851500 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.852111 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.352093098 +0000 UTC m=+153.508069227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.869433 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" podStartSLOduration=127.86941018 podStartE2EDuration="2m7.86941018s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:42.86905914 +0000 UTC m=+153.025035269" watchObservedRunningTime="2026-02-03 10:04:42.86941018 +0000 UTC m=+153.025386309"
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.953010 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:42 crc kubenswrapper[5010]: E0203 10:04:42.953364 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.453349356 +0000 UTC m=+153.609325485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:42 crc kubenswrapper[5010]: I0203 10:04:42.992564 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" podStartSLOduration=127.99252343 podStartE2EDuration="2m7.99252343s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:42.990491822 +0000 UTC m=+153.146467951" watchObservedRunningTime="2026-02-03 10:04:42.99252343 +0000 UTC m=+153.148499559"
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.025515 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sk5mk" podStartSLOduration=128.025500818 podStartE2EDuration="2m8.025500818s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:43.024496639 +0000 UTC m=+153.180472758" watchObservedRunningTime="2026-02-03 10:04:43.025500818 +0000 UTC m=+153.181476947"
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.055018 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.055394 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.555380407 +0000 UTC m=+153.711356536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.105074 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" podStartSLOduration=127.105037519 podStartE2EDuration="2m7.105037519s" podCreationTimestamp="2026-02-03 10:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:43.104662828 +0000 UTC m=+153.260638957" watchObservedRunningTime="2026-02-03 10:04:43.105037519 +0000 UTC m=+153.261013658"
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.156355 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.156626 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.656615726 +0000 UTC m=+153.812591855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.257615 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.257787 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.757756301 +0000 UTC m=+153.913732440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.258203 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.258337 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.259170 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.759147531 +0000 UTC m=+153.915123660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.261519 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc73dc6e-53ff-48b8-932e-d5aeb839f2dd-images\") pod \"machine-api-operator-5694c8668f-5mq4r\" (UID: \"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.359654 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.361850 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.861812169 +0000 UTC m=+154.017788318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.461751 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.462012 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:43.962001558 +0000 UTC m=+154.117977687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.467531 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.474794 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" event={"ID":"291724bc-0382-45d5-a089-356f8e04feb5","Type":"ContainerStarted","Data":"d09b6b5f9ac6bd18361a9402bf1dca7d0a94a47065f382b54d94d62e893c1442"}
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.475455 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" event={"ID":"8f59fb23-ca1e-487d-a345-9eada8d1c7a8","Type":"ContainerStarted","Data":"d399b1c5a3f43e58fedc7b9a0a08aed708e61a8d74d46b2f172ad28150ef8e77"}
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.476875 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" event={"ID":"9b9c4aab-790c-4581-bfc2-ad1d7302c704","Type":"ContainerStarted","Data":"68feaa08ed8d91769630ca032dc73a0d3797e1b08b8b7690cc25c9c07a16da2d"}
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.477773 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" event={"ID":"ad56317f-8d37-4d59-9abe-346b4340a30c","Type":"ContainerStarted","Data":"43e7a9a88e3189f6d03a24d82d6bf5772d80eb44d7e35ef9262d2307d16d642e"}
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.479410 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-whpdl" event={"ID":"58ae0ba7-4454-4bec-87ac-432b346ee643","Type":"ContainerStarted","Data":"5dc9dea6bb83b5aa1a5dc6a32b24b5130b67e717def7180825c1220d656eae5f"}
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.487919 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-77jcb" event={"ID":"9fed3a51-8c05-46a7-8057-6839f70b2f22","Type":"ContainerStarted","Data":"ae890a1155114474ca855d42e61d125728f56e3c0bdaf5cc6c93ab0eda43bc46"}
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.489258 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-wtcpj" podStartSLOduration=128.489241892 podStartE2EDuration="2m8.489241892s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:43.488311056 +0000 UTC m=+153.644287195" watchObservedRunningTime="2026-02-03 10:04:43.489241892 +0000 UTC m=+153.645218021"
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.494722 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" event={"ID":"b075f5c7-f95f-4883-8d94-d1b64bc3c451","Type":"ContainerStarted","Data":"c226bd811c14d9f2781ff06c9170ca96b94d7443a0abb725c369539becb8c659"}
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.496109 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" event={"ID":"ba766e4c-056f-4be6-a4b9-05592b641f87","Type":"ContainerStarted","Data":"b3bf5d30070b3fb5585bd35ae1024f758c653218b59c4571de8b3db3f4707cdb"}
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.512956 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh"]
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.562658 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.563751 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.06373552 +0000 UTC m=+154.219711649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.665587 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.667111 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.167072948 +0000 UTC m=+154.323049077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.768273 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.768708 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.268687317 +0000 UTC m=+154.424663456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.768962 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.770291 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.269358496 +0000 UTC m=+154.425334625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.877293 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.878045 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.378031166 +0000 UTC m=+154.534007295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.963535 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw"]
Feb 03 10:04:43 crc kubenswrapper[5010]: I0203 10:04:43.978923 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:43 crc kubenswrapper[5010]: E0203 10:04:43.979327 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.479315496 +0000 UTC m=+154.635291625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.080704 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.080994 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.580978485 +0000 UTC m=+154.736954614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.148988 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ljpd5"]
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.162278 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr"]
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.166812 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jvtp4"]
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.182090 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.182658 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.682647116 +0000 UTC m=+154.838623235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.285443 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.285855 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.785835819 +0000 UTC m=+154.941811948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.386980 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s"
Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.387653 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.887639244 +0000 UTC m=+155.043615373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.431054 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5"]
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.446085 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r"]
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.488067 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.490805 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:44.988546743 +0000 UTC m=+155.144522872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.511685 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" event={"ID":"2e96179c-7517-40d5-918f-1fc379e16fec","Type":"ContainerStarted","Data":"c06f71b8a3485feb4d4e37099aefa63f8ec2028b510e3bdf44f1b8c79a936b18"}
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.514345 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" event={"ID":"8f59fb23-ca1e-487d-a345-9eada8d1c7a8","Type":"ContainerStarted","Data":"94a6318a94fadd61ac6fffc64c12d749005bd1f05159ad152119aa6c71e84f25"}
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.518581 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-77jcb" event={"ID":"9fed3a51-8c05-46a7-8057-6839f70b2f22","Type":"ContainerStarted","Data":"b4b7ea1d93ea8b711f0814bb0c671c1b519562e24821324d368aaf60782de6c2"}
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.528562 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" event={"ID":"4da6d2c9-755f-44e5-bab0-37cf60ee8378","Type":"ContainerStarted","Data":"9bf5ab8173b90fcf1fb1b6b6f0cee7ebde419e996a1afc9923b2156ea4ae9ec5"}
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.528605 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" event={"ID":"4da6d2c9-755f-44e5-bab0-37cf60ee8378","Type":"ContainerStarted","Data":"0a7b830c84f4c17e07abbcd752af7b6757f4601b9486d64923c938a9ea06cb7b"}
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.529338 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ljpd5"
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.534697 5010 patch_prober.go:28] interesting pod/console-operator-58897d9998-ljpd5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.534753 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" podUID="4da6d2c9-755f-44e5-bab0-37cf60ee8378" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.534993 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6t4bv" podStartSLOduration=129.534972333 podStartE2EDuration="2m9.534972333s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.534902251 +0000 UTC m=+154.690878370" watchObservedRunningTime="2026-02-03 10:04:44.534972333 +0000 UTC m=+154.690948452"
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.538811 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" event={"ID":"ba766e4c-056f-4be6-a4b9-05592b641f87","Type":"ContainerStarted","Data":"63417935118a7c173d443e363c6575264227831b1a94822efbc7be942deeeeba"}
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.549338 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-77jcb" podStartSLOduration=5.54931603 podStartE2EDuration="5.54931603s" podCreationTimestamp="2026-02-03 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.549241328 +0000 UTC m=+154.705217457" watchObservedRunningTime="2026-02-03 10:04:44.54931603 +0000 UTC m=+154.705292159"
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.551628 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" event={"ID":"cde7673b-c4b1-4060-86cd-cac7120de9bf","Type":"ContainerStarted","Data":"63fbbf9ee06318f4203063b75261e35739310c9dd1b8622a18a36ebd23fe5276"}
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.551675 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" event={"ID":"cde7673b-c4b1-4060-86cd-cac7120de9bf","Type":"ContainerStarted","Data":"646277fedd17218abbc0cad255536f07c18ed6906549b680f077c3653992eba5"}
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.554454 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" event={"ID":"9b9c4aab-790c-4581-bfc2-ad1d7302c704","Type":"ContainerStarted","Data":"15e10260ef913b6b44e27ef0b7816cd144403f167a0779e8880ec7a69901a07c"}
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.578622 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" podStartSLOduration=129.578596503 podStartE2EDuration="2m9.578596503s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.574137896 +0000 UTC m=+154.730114035" watchObservedRunningTime="2026-02-03 10:04:44.578596503 +0000 UTC m=+154.734572632"
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.586661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" event={"ID":"ad56317f-8d37-4d59-9abe-346b4340a30c","Type":"ContainerStarted","Data":"561515a2fa3c14b15007ba96e1540c1eea0059aab141c67d389b4f4a91a3b04d"}
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.589072 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" event={"ID":"b693a4b6-8aa6-489e-a797-fa486eab7443","Type":"ContainerStarted","Data":"3c09bdc0fc16bf94389e2c826ab81bb9d2595ddca0db77387061ad8ac768b3fa"}
Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.590417 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.591864 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bd2tr" podStartSLOduration=129.59184969 podStartE2EDuration="2m9.59184969s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.590301346 +0000 UTC m=+154.746277475" watchObservedRunningTime="2026-02-03 10:04:44.59184969 +0000 UTC m=+154.747825849" Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.592846 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.092831468 +0000 UTC m=+155.248807597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.618722 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" event={"ID":"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c","Type":"ContainerStarted","Data":"7bd5cd5437487cb168e25c92e0417529bd1becc82ce0d8d6889d660ccc99f901"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.628981 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" event={"ID":"b075f5c7-f95f-4883-8d94-d1b64bc3c451","Type":"ContainerStarted","Data":"30c448f2a29441f24eefd6e7d24e4234e2550ba0183f51cdb0b88e4eb91d5b59"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.631302 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8qfbt" podStartSLOduration=129.631285101 podStartE2EDuration="2m9.631285101s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.618805186 +0000 UTC m=+154.774781315" watchObservedRunningTime="2026-02-03 10:04:44.631285101 +0000 UTC m=+154.787261230" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.641609 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xcpwg" podStartSLOduration=129.641592004 podStartE2EDuration="2m9.641592004s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.640600916 +0000 UTC 
m=+154.796577065" watchObservedRunningTime="2026-02-03 10:04:44.641592004 +0000 UTC m=+154.797568143" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.644533 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvtp4" event={"ID":"d8101cd0-5430-4786-bf8a-3d9c60ad1f7d","Type":"ContainerStarted","Data":"3222ee61b2c693351e65e3c9805fb25da78814dc65c7e68669f689bfa569da6e"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.644747 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jvtp4" event={"ID":"d8101cd0-5430-4786-bf8a-3d9c60ad1f7d","Type":"ContainerStarted","Data":"80515b9fecee374b5b46af16212dfe32a98caf42e9abceb1859afbb4272d8ccc"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.645089 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.660689 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" podStartSLOduration=129.660669416 podStartE2EDuration="2m9.660669416s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.660633375 +0000 UTC m=+154.816609504" watchObservedRunningTime="2026-02-03 10:04:44.660669416 +0000 UTC m=+154.816645565" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.661618 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.661653 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.662245 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" event={"ID":"291724bc-0382-45d5-a089-356f8e04feb5","Type":"ContainerStarted","Data":"a8e7175cf248ee167e4bce18e263051f06083ef8fce008268671d4d23c14b09d"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.669550 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" event={"ID":"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26","Type":"ContainerStarted","Data":"797b45f8a292343ce10b798cb89191b38f38d31166c410472811969dfabb16ff"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.669598 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" event={"ID":"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26","Type":"ContainerStarted","Data":"b7b98936602d666e0476c5533daefd7d52973ccc05815593539ba02cb185939b"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.695886 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.697164 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.197145003 +0000 UTC m=+155.353121132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.697637 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" event={"ID":"5a475011-4dc0-4490-829a-8016f3b0e8a2","Type":"ContainerStarted","Data":"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.698256 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.698943 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.701828 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.711715 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-whpdl" event={"ID":"58ae0ba7-4454-4bec-87ac-432b346ee643","Type":"ContainerStarted","Data":"a610fbebdc3ffa04f0473e337125d5909ac8c2a69e900a69a42c9394815ffb75"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.719130 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vxlln" podStartSLOduration=129.719106298 podStartE2EDuration="2m9.719106298s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.690195976 +0000 UTC m=+154.846172105" watchObservedRunningTime="2026-02-03 10:04:44.719106298 +0000 UTC m=+154.875082427" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.728934 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" event={"ID":"e9dc4ca7-8fe2-4479-989b-0cc98c651c96","Type":"ContainerStarted","Data":"90a8ac0aae794574cdb438e1ebde8fd7ef59d57a49f3d9f4465932e4b5db7b87"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.728985 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" 
event={"ID":"e9dc4ca7-8fe2-4479-989b-0cc98c651c96","Type":"ContainerStarted","Data":"11cb1b03743b8508d914607707efee374c51c7de656433e84775f38a25f8a0fc"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.729560 5010 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rkqd6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.729609 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.731179 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.734387 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jvtp4" podStartSLOduration=129.734369832 podStartE2EDuration="2m9.734369832s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.713273082 +0000 UTC m=+154.869249221" watchObservedRunningTime="2026-02-03 10:04:44.734369832 +0000 UTC m=+154.890345961" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.741122 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.745306 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.749125 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" event={"ID":"45194a2a-320c-439d-9070-2c534070b7e4","Type":"ContainerStarted","Data":"fec05ca2955a10df7039d4ef3ec746434bb3f8c492847ff25649e70ce1c6026c"} Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.749250 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" event={"ID":"45194a2a-320c-439d-9070-2c534070b7e4","Type":"ContainerStarted","Data":"373e3b089c1699c0575f908abd08461b75848ab282b800ce89ccc4ab65b90340"} Feb 03 10:04:44 crc kubenswrapper[5010]: W0203 10:04:44.770638 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b5592be_8839_4660_a4c4_ab662fc975eb.slice/crio-2ade3cdf2529ce4152b52a6e4a45299bf6c1e2325f1341f2c73a3d85ad1e71e8 WatchSource:0}: Error finding container 2ade3cdf2529ce4152b52a6e4a45299bf6c1e2325f1341f2c73a3d85ad1e71e8: Status 404 returned error can't find the container with id 2ade3cdf2529ce4152b52a6e4a45299bf6c1e2325f1341f2c73a3d85ad1e71e8 Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.774625 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.781704 5010 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" podStartSLOduration=129.781679027 podStartE2EDuration="2m9.781679027s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.768026479 +0000 UTC m=+154.924002608" watchObservedRunningTime="2026-02-03 10:04:44.781679027 +0000 UTC m=+154.937655176" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.793800 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.801076 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.806577 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.306558234 +0000 UTC m=+155.462534363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.809104 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m4jjq"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.850761 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.852126 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkdmn" podStartSLOduration=129.852105649 podStartE2EDuration="2m9.852105649s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.798359061 +0000 UTC m=+154.954335190" watchObservedRunningTime="2026-02-03 10:04:44.852105649 +0000 UTC m=+155.008081778" Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.878053 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9t7q"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.892544 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.918470 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.918792 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-whpdl" podStartSLOduration=129.918773514 podStartE2EDuration="2m9.918773514s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.898521659 +0000 UTC m=+155.054497788" watchObservedRunningTime="2026-02-03 10:04:44.918773514 +0000 UTC m=+155.074749643" Feb 03 10:04:44 crc kubenswrapper[5010]: E0203 10:04:44.918956 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.418937089 +0000 UTC m=+155.574913218 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.920809 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-x7hq6"] Feb 03 10:04:44 crc kubenswrapper[5010]: I0203 10:04:44.992028 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-7ztl2" podStartSLOduration=129.992009617 podStartE2EDuration="2m9.992009617s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:44.981679023 +0000 UTC m=+155.137655152" watchObservedRunningTime="2026-02-03 10:04:44.992009617 +0000 UTC m=+155.147985746" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.014206 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwrkh" podStartSLOduration=129.014189497 podStartE2EDuration="2m9.014189497s" podCreationTimestamp="2026-02-03 10:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:45.007540638 +0000 UTC m=+155.163516767" watchObservedRunningTime="2026-02-03 10:04:45.014189497 +0000 UTC m=+155.170165626" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.023268 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.023682 5010 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.523653836 +0000 UTC m=+155.679629965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.026134 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f9lhg"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.040983 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.049081 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.053824 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vxx8p"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.074243 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5mq4r"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.102738 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.122760 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.123801 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.124080 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.624066761 +0000 UTC m=+155.780042890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.169998 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.199597 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.205466 5010 patch_prober.go:28] interesting pod/router-default-5444994796-whpdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 10:04:45 crc kubenswrapper[5010]: [-]has-synced failed: reason withheld Feb 03 10:04:45 crc kubenswrapper[5010]: [+]process-running ok Feb 03 10:04:45 crc kubenswrapper[5010]: healthz check failed Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.205524 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whpdl" podUID="58ae0ba7-4454-4bec-87ac-432b346ee643" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.221235 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99"] Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.224660 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.224955 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.724942439 +0000 UTC m=+155.880918568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: W0203 10:04:45.258548 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ddcb32c_fe4a_4f24_bc77_d6bc56562d75.slice/crio-c3eea924367a5036aaeefe59a51974e32e0154c319bec0b602fa06f78f2e5fb8 WatchSource:0}: Error finding container c3eea924367a5036aaeefe59a51974e32e0154c319bec0b602fa06f78f2e5fb8: Status 404 returned error can't find the container with id c3eea924367a5036aaeefe59a51974e32e0154c319bec0b602fa06f78f2e5fb8 Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.326875 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.327605 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.827584037 +0000 UTC m=+155.983560166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.429117 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.429536 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:45.929521116 +0000 UTC m=+156.085497245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.532037 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.533683 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.033663676 +0000 UTC m=+156.189639805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.637658 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.638002 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.137987382 +0000 UTC m=+156.293963511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.741780 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.741954 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.241926998 +0000 UTC m=+156.397903127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.742116 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.742485 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.242474353 +0000 UTC m=+156.398450482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.828820 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vxx8p" event={"ID":"d5af6be2-06e9-4fbc-a138-ada090853bc7","Type":"ContainerStarted","Data":"37a5bf791d71df2e3c42ff9e212a92bbb51e5c149897cdf18b9c17176163a868"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.829143 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vxx8p" event={"ID":"d5af6be2-06e9-4fbc-a138-ada090853bc7","Type":"ContainerStarted","Data":"c99e2accf1e9f7bf057557faccf7afc119f9a4f625184d3dca3aa52fa55e9733"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.845913 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.846778 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.346762538 +0000 UTC m=+156.502738667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.856283 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vxx8p" podStartSLOduration=6.856259948 podStartE2EDuration="6.856259948s" podCreationTimestamp="2026-02-03 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:45.85173647 +0000 UTC m=+156.007712609" watchObservedRunningTime="2026-02-03 10:04:45.856259948 +0000 UTC m=+156.012236087" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.864060 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" event={"ID":"433ae711-459e-4627-83c1-0fecfe929c60","Type":"ContainerStarted","Data":"f93b801ce0dca0c469ff09982034831af93b63cf34d684cee9bfe492088d1762"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.873834 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" event={"ID":"ec11c4de-b7ae-4b50-ab95-20be670ab6e8","Type":"ContainerStarted","Data":"b5f05f52af61cfe26c6aab58a8b996a878767b581abf72040ffaed251a9971df"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.873898 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" event={"ID":"ec11c4de-b7ae-4b50-ab95-20be670ab6e8","Type":"ContainerStarted","Data":"293f758cfef2035f88ec5bc09cf396c21e4fa0ec8021ab65013f44898c950667"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.896647 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" event={"ID":"b693a4b6-8aa6-489e-a797-fa486eab7443","Type":"ContainerStarted","Data":"294d969e011425258ad251779e47b0c179a8f9497cdd382eaf2ca07a38e507c1"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.897646 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.915147 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" event={"ID":"cde7673b-c4b1-4060-86cd-cac7120de9bf","Type":"ContainerStarted","Data":"7fa36036bb193f801098485fb02f2ce8c3dab3f18e9cdc63b4415c5e8ec9f25d"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.946969 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" event={"ID":"0c3f3f4e-122f-40b8-a3f1-d868a36640a1","Type":"ContainerStarted","Data":"3fd62a71078b1cb43038650754e215e13cac68a8cf4058f3c96fc00b7c1254e4"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.947014 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" 
event={"ID":"0c3f3f4e-122f-40b8-a3f1-d868a36640a1","Type":"ContainerStarted","Data":"c86b1c135cccd6d567565de23857d5e3ace4b68f753e7f63499d354b07f9ee1a"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.947705 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:45 crc kubenswrapper[5010]: E0203 10:04:45.949279 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.449266462 +0000 UTC m=+156.605242591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.963949 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" podStartSLOduration=130.963932379 podStartE2EDuration="2m10.963932379s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:45.962987403 +0000 UTC m=+156.118963532" watchObservedRunningTime="2026-02-03 10:04:45.963932379 +0000 UTC m=+156.119908508" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.964286 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fs75k" podStartSLOduration=130.964282139 podStartE2EDuration="2m10.964282139s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:45.91296212 +0000 UTC m=+156.068938249" watchObservedRunningTime="2026-02-03 10:04:45.964282139 +0000 UTC m=+156.120258258" Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.995890 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" event={"ID":"9cddf065-d958-4bf4-b5a8-67321cba2f67","Type":"ContainerStarted","Data":"5be02c0cccee5cc5627eacb85d0058e31cee57c79bec998d1cc510ca71f853da"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.995946 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" event={"ID":"9cddf065-d958-4bf4-b5a8-67321cba2f67","Type":"ContainerStarted","Data":"581875668ab6c17ac7b5b9be84de72b09eb74b7d738bbea9f96cccaeb2f81662"} Feb 03 10:04:45 crc kubenswrapper[5010]: I0203 10:04:45.997306 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:46 crc 
kubenswrapper[5010]: I0203 10:04:46.022842 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" event={"ID":"98d0bd22-70a8-4496-9074-3251c15e5b59","Type":"ContainerStarted","Data":"e81a392a89a2b15b43b8d2297fe5d7d2ca7f9ba8526d5464b4476a06ec368f96"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.029884 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-b78vw" podStartSLOduration=131.029864244 podStartE2EDuration="2m11.029864244s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:45.999407498 +0000 UTC m=+156.155383647" watchObservedRunningTime="2026-02-03 10:04:46.029864244 +0000 UTC m=+156.185840383" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.030805 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" podStartSLOduration=131.03079908 podStartE2EDuration="2m11.03079908s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.027813526 +0000 UTC m=+156.183789665" watchObservedRunningTime="2026-02-03 10:04:46.03079908 +0000 UTC m=+156.186775220" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.031313 5010 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-65mrf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.031352 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" podUID="9cddf065-d958-4bf4-b5a8-67321cba2f67" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.044782 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" event={"ID":"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d","Type":"ContainerStarted","Data":"940ad3dd3fbb496db0baca4fe005c88ba3a8b5856d186e50e2353dc0d2659e9d"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.053986 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.055164 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.555144693 +0000 UTC m=+156.711120822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.067736 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" event={"ID":"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3","Type":"ContainerStarted","Data":"f12d5d4b66060063a6e5fbeb3be26c884e2d80745ae9248253b4aa0557708464"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.067797 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" event={"ID":"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3","Type":"ContainerStarted","Data":"91eda359acc35811911d91d736b4f5dfa8bc7017b4342b54f6f3969cb4b1a75b"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.067813 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" event={"ID":"1b8cbffa-cf1a-4658-bd1b-7e7323449bf3","Type":"ContainerStarted","Data":"046e9483d7e5ef6675a2efa85fd05ebbbe9383866d5c894cf4ec997b25f9780a"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.071435 5010 generic.go:334] "Generic (PLEG): container finished" podID="51fcb019-af4d-4f3d-b1b0-4b4e6761db7c" containerID="44762100bff179e19e68fc7183f3f9b331a0d53e199426f84db2785f934fb945" exitCode=0 Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.071536 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" event={"ID":"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c","Type":"ContainerDied","Data":"44762100bff179e19e68fc7183f3f9b331a0d53e199426f84db2785f934fb945"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.087817 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" podStartSLOduration=131.087791351 podStartE2EDuration="2m11.087791351s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.053691421 +0000 UTC m=+156.209667550" watchObservedRunningTime="2026-02-03 10:04:46.087791351 +0000 UTC m=+156.243767490" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.104383 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" event={"ID":"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899","Type":"ContainerStarted","Data":"3981a613e4c2fbdad5b4c2bb31b2a507dd0406e8f86fd4c60620c9e72f9533d8"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.112778 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" event={"ID":"2f2ac3f6-ed20-4205-9dfd-ce6d76269c26","Type":"ContainerStarted","Data":"2c0df893671116c4308e9f2a19b12ebca23d86f810b769a32c2ae536ba86f83f"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.119144 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zwvcg" podStartSLOduration=131.119126322 podStartE2EDuration="2m11.119126322s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.089891141 +0000 UTC m=+156.245867290" watchObservedRunningTime="2026-02-03 10:04:46.119126322 +0000 UTC m=+156.275102451" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.141034 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bh4wr" podStartSLOduration=131.141017994 podStartE2EDuration="2m11.141017994s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.138497052 +0000 UTC m=+156.294473181" watchObservedRunningTime="2026-02-03 10:04:46.141017994 +0000 UTC m=+156.296994113" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.155368 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" event={"ID":"d882e1bb-7ece-45ea-9e5e-0d23f162f06e","Type":"ContainerStarted","Data":"38f925966a34278b067557de50e9c41e692b377ad6073e9c8f649efcd66ae491"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.155411 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" event={"ID":"d882e1bb-7ece-45ea-9e5e-0d23f162f06e","Type":"ContainerStarted","Data":"4c9eef1ef6b1b398b5b6d439972963b0ced43649ecec034e904e5f1abffb1f27"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.160690 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.163104 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.663091632 +0000 UTC m=+156.819067761 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.180058 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m4jjq" event={"ID":"a3d78816-3c67-4a17-8951-b605e971aa3b","Type":"ContainerStarted","Data":"79cdc3dc2ea16554708cddd4eb7c71a2fb3c85e6241fe9da5c9c1f0b122574b9"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.182400 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m4jjq" event={"ID":"a3d78816-3c67-4a17-8951-b605e971aa3b","Type":"ContainerStarted","Data":"df5c052b0ca9ef3d931104c92a268f401887f2d5870fe9cfa48661f36fa33c30"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.184394 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-c9t7q" podStartSLOduration=130.184376987 podStartE2EDuration="2m10.184376987s" podCreationTimestamp="2026-02-03 10:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.182788052 +0000 UTC m=+156.338764181" watchObservedRunningTime="2026-02-03 10:04:46.184376987 +0000 UTC m=+156.340353116" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.204036 5010 patch_prober.go:28] interesting pod/router-default-5444994796-whpdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 10:04:46 crc kubenswrapper[5010]: [-]has-synced failed: reason withheld Feb 03 10:04:46 crc kubenswrapper[5010]: [+]process-running ok Feb 03 10:04:46 crc kubenswrapper[5010]: healthz check failed Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.204094 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whpdl" podUID="58ae0ba7-4454-4bec-87ac-432b346ee643" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.204954 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" event={"ID":"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd","Type":"ContainerStarted","Data":"f6bc7008aed2e1cc27b0e5157e43328f2571d68a98845128c53dc3f2ef0a9cab"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.256626 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" event={"ID":"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75","Type":"ContainerStarted","Data":"c3eea924367a5036aaeefe59a51974e32e0154c319bec0b602fa06f78f2e5fb8"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.261882 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.262264 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.76223473 +0000 UTC m=+156.918210859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.265910 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" event={"ID":"1b5592be-8839-4660-a4c4-ab662fc975eb","Type":"ContainerStarted","Data":"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.265954 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" event={"ID":"1b5592be-8839-4660-a4c4-ab662fc975eb","Type":"ContainerStarted","Data":"2ade3cdf2529ce4152b52a6e4a45299bf6c1e2325f1341f2c73a3d85ad1e71e8"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.266491 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.275657 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" event={"ID":"c07afc79-e943-4e79-93ed-8eedd0ade1bc","Type":"ContainerStarted","Data":"7d36f05199bd4236b19601f8bb5bb2c733a5becdb85190b74621819ca44ec567"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.282987 5010 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6kg4f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.283053 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.292862 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" podStartSLOduration=131.292842531 podStartE2EDuration="2m11.292842531s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.291686378 +0000 UTC m=+156.447662507" watchObservedRunningTime="2026-02-03 10:04:46.292842531 +0000 UTC m=+156.448818660" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.296938 5010 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" event={"ID":"6e12e505-3d35-4b3e-8015-9e2341d4791e","Type":"ContainerStarted","Data":"a35aa10695f71d166b4c7d6d25f3126748c3f6a60fcfdf34ea00ea0d114b01ab"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.296990 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" event={"ID":"6e12e505-3d35-4b3e-8015-9e2341d4791e","Type":"ContainerStarted","Data":"69be82670c15aaf4ca975cbfb52e590fc00dad12980abd48ab36b0ab7886dccf"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.308833 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" event={"ID":"effb39d8-ef30-45f3-bf93-b9dbb8de2475","Type":"ContainerStarted","Data":"ca3afc0cb6bc25e4c74c1c85c6d68d2d62b975ad765679a0d7c02e5221220d70"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.316561 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-68xdt" podStartSLOduration=131.316544975 podStartE2EDuration="2m11.316544975s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.315985319 +0000 UTC m=+156.471961448" watchObservedRunningTime="2026-02-03 10:04:46.316544975 +0000 UTC m=+156.472521104" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.327092 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" event={"ID":"df4fd08a-dcc8-4d5c-95ad-9a3542df3233","Type":"ContainerStarted","Data":"022f5568881c0ea59ebfda6fd0b3b4d0681587700cd7b14ffdc63e70cb157b46"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.327136 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.327147 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" event={"ID":"df4fd08a-dcc8-4d5c-95ad-9a3542df3233","Type":"ContainerStarted","Data":"8a28e2edc657a5048c58ba3b8cd63019dd256e0941b8bb0d428cde6696ecbb40"} Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.329711 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.329766 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.336557 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.337173 5010 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9lvbs" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.337250 5010 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sgfk5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.337309 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" podUID="df4fd08a-dcc8-4d5c-95ad-9a3542df3233" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.344084 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" podStartSLOduration=131.344061137 podStartE2EDuration="2m11.344061137s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.341391841 +0000 UTC m=+156.497367990" watchObservedRunningTime="2026-02-03 10:04:46.344061137 +0000 UTC m=+156.500037266" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.364139 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.365555 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.865540688 +0000 UTC m=+157.021516817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.391683 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.391743 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.399952 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" podStartSLOduration=131.399937595 podStartE2EDuration="2m11.399937595s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:46.398066482 +0000 UTC m=+156.554042621" watchObservedRunningTime="2026-02-03 10:04:46.399937595 +0000 UTC m=+156.555913724" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.465183 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.466552 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:46.966538489 +0000 UTC m=+157.122514618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.548537 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ljpd5" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.572058 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.572479 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.072463331 +0000 UTC m=+157.228439460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.673535 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.673876 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.173843353 +0000 UTC m=+157.329819482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.674043 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.674453 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.1744399 +0000 UTC m=+157.330416029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.774892 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.775467 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.275450682 +0000 UTC m=+157.431426811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.876949 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.877325 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.377310708 +0000 UTC m=+157.533286837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.898366 5010 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5v56r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.898439 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" podUID="b693a4b6-8aa6-489e-a797-fa486eab7443" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 10:04:46 crc kubenswrapper[5010]: I0203 10:04:46.977567 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:46 crc kubenswrapper[5010]: E0203 10:04:46.977843 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.477828556 +0000 UTC m=+157.633804685 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.079183 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.079555 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.579536457 +0000 UTC m=+157.735512576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.180521 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.180830 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.680811317 +0000 UTC m=+157.836787446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.202895 5010 patch_prober.go:28] interesting pod/router-default-5444994796-whpdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 10:04:47 crc kubenswrapper[5010]: [-]has-synced failed: reason withheld Feb 03 10:04:47 crc kubenswrapper[5010]: [+]process-running ok Feb 03 10:04:47 crc kubenswrapper[5010]: healthz check failed Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.202954 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whpdl" podUID="58ae0ba7-4454-4bec-87ac-432b346ee643" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.282311 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.282621 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.782609331 +0000 UTC m=+157.938585460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.332762 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" event={"ID":"51fcb019-af4d-4f3d-b1b0-4b4e6761db7c","Type":"ContainerStarted","Data":"3d5b0314dbf5f7aa34902e2f182fbe043e436b36cb6ed7a9cef2d51c643e7586"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.333022 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.336019 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-2nxxl" event={"ID":"effb39d8-ef30-45f3-bf93-b9dbb8de2475","Type":"ContainerStarted","Data":"59d2e6be15ba379b92c54da78afd9e360e0303aeed041e8422f8591b6facc1d5"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.347039 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" event={"ID":"0c3f3f4e-122f-40b8-a3f1-d868a36640a1","Type":"ContainerStarted","Data":"23c69c382437672d737ee1c9253d3b649d64f02967342da4e53a5491d6d11f41"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.348808 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m76db" event={"ID":"98d0bd22-70a8-4496-9074-3251c15e5b59","Type":"ContainerStarted","Data":"cdf675738afd3e9673d7c8a3c2913d7c4bc0acfe2b768177dba47b892ee26961"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.350846 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" event={"ID":"f2eab9ad-fdb0-4f6e-b1a0-0974672a7b9d","Type":"ContainerStarted","Data":"d08219555fdd6f860a0a0c79c84a54c4b3e8a908b3af087bc85c670dc0d42cca"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.352417 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" event={"ID":"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75","Type":"ContainerStarted","Data":"383a0977c6395c435f6bb1299748991ed3b67014a76c643016fe6b5a4e816b5f"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.352517 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" event={"ID":"4ddcb32c-fe4a-4f24-bc77-d6bc56562d75","Type":"ContainerStarted","Data":"e5913507e44fa6d528e47da0f0114d9206e0ec497acfe02ae985ddd84c0403e9"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.352926 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.354529 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" 
event={"ID":"c07afc79-e943-4e79-93ed-8eedd0ade1bc","Type":"ContainerStarted","Data":"f0df05e572c326ea9e0d57460e80d77f10d1a3c2b4d4095e934f18b8ec8a413b"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.354668 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" event={"ID":"c07afc79-e943-4e79-93ed-8eedd0ade1bc","Type":"ContainerStarted","Data":"69cfa19a166eae9cd879b7005d145b980c74b652e62753b8299ac40f360fdf1c"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.355851 5010 generic.go:334] "Generic (PLEG): container finished" podID="433ae711-459e-4627-83c1-0fecfe929c60" containerID="bf00e2dc0609d8f8edc0d28df9931c5f0a4f06db5d7656d44ecf648458c7ddb9" exitCode=0 Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.355964 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" event={"ID":"433ae711-459e-4627-83c1-0fecfe929c60","Type":"ContainerDied","Data":"bf00e2dc0609d8f8edc0d28df9931c5f0a4f06db5d7656d44ecf648458c7ddb9"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.358448 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" event={"ID":"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899","Type":"ContainerStarted","Data":"ea0da8ac601491fc423c1f3ea9db2da711074561434d518de7c75c9e854318a2"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.365483 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m4jjq" event={"ID":"a3d78816-3c67-4a17-8951-b605e971aa3b","Type":"ContainerStarted","Data":"a91d03d2e43775e69573400698a0acd2d175d55e99844b3b5eafec60117cd010"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.365831 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.368363 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" event={"ID":"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd","Type":"ContainerStarted","Data":"2bc4721c936d5b0596015432afe46b59d8f2e781c92a4deae0330e775de3eb67"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.368476 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" event={"ID":"dc73dc6e-53ff-48b8-932e-d5aeb839f2dd","Type":"ContainerStarted","Data":"07d6436cf7500596fc6c1d939b7bc2ce20fb17332064138acadb1954b3034551"} Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.373125 5010 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6kg4f container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.373188 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.379134 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-65mrf" Feb 03 10:04:47 crc kubenswrapper[5010]: 
I0203 10:04:47.385946 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.386232 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.886198946 +0000 UTC m=+158.042175075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.386661 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.387109 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.887101172 +0000 UTC m=+158.043077301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.391504 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5v56r" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.400512 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" podStartSLOduration=132.400494472 podStartE2EDuration="2m12.400494472s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.35996295 +0000 UTC m=+157.515939079" watchObservedRunningTime="2026-02-03 10:04:47.400494472 +0000 UTC m=+157.556470601" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.401786 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" podStartSLOduration=132.401768149 podStartE2EDuration="2m12.401768149s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.398601579 +0000 UTC m=+157.554577708" watchObservedRunningTime="2026-02-03 10:04:47.401768149 +0000 UTC m=+157.557744278" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.403026 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgfk5" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.423575 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4pcf" podStartSLOduration=132.423558268 podStartE2EDuration="2m12.423558268s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.423353792 +0000 UTC m=+157.579329921" watchObservedRunningTime="2026-02-03 10:04:47.423558268 +0000 UTC m=+157.579534397" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.442016 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-x7hq6" podStartSLOduration=132.442001372 podStartE2EDuration="2m12.442001372s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.440692005 +0000 UTC m=+157.596668134" watchObservedRunningTime="2026-02-03 10:04:47.442001372 +0000 UTC m=+157.597977501" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.488321 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.489868 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:47.989848313 +0000 UTC m=+158.145824442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.501951 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhrgt" podStartSLOduration=132.501938647 podStartE2EDuration="2m12.501938647s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.501491324 +0000 UTC m=+157.657467463" watchObservedRunningTime="2026-02-03 10:04:47.501938647 +0000 UTC m=+157.657914766" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.540313 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-m4jjq" podStartSLOduration=8.540299197 podStartE2EDuration="8.540299197s" podCreationTimestamp="2026-02-03 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.538729493 +0000 UTC m=+157.694705622" watchObservedRunningTime="2026-02-03 10:04:47.540299197 +0000 UTC m=+157.696275326" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.599185 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.599626 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.099611552 +0000 UTC m=+158.255587681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.621018 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5mq4r" podStartSLOduration=132.6209952 podStartE2EDuration="2m12.6209952s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:47.579192153 +0000 UTC m=+157.735168282" watchObservedRunningTime="2026-02-03 10:04:47.6209952 +0000 UTC m=+157.776971339" Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.699994 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.700372 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.200340316 +0000 UTC m=+158.356316445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.801758 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.802099 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.302087979 +0000 UTC m=+158.458064108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:47 crc kubenswrapper[5010]: I0203 10:04:47.902818 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:47 crc kubenswrapper[5010]: E0203 10:04:47.903438 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.40342321 +0000 UTC m=+158.559399339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.005281 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.005688 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.505660227 +0000 UTC m=+158.661636406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.106890 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.107120 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.60707891 +0000 UTC m=+158.763055039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.107230 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.107501 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.607489692 +0000 UTC m=+158.763465811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.208271 5010 patch_prober.go:28] interesting pod/router-default-5444994796-whpdl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 10:04:48 crc kubenswrapper[5010]: [-]has-synced failed: reason withheld Feb 03 10:04:48 crc kubenswrapper[5010]: [+]process-running ok Feb 03 10:04:48 crc kubenswrapper[5010]: healthz check failed Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.208332 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-whpdl" podUID="58ae0ba7-4454-4bec-87ac-432b346ee643" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.208854 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.209327 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.709307397 +0000 UTC m=+158.865283526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.209420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.209809 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.709795811 +0000 UTC m=+158.865771940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.307739 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f8ldc"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.309228 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.309985 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.310188 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.810167834 +0000 UTC m=+158.966143963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.310420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.310670 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.810663868 +0000 UTC m=+158.966639987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.312093 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.330892 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f8ldc"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.374667 5010 generic.go:334] "Generic (PLEG): container finished" podID="9b9c4aab-790c-4581-bfc2-ad1d7302c704" containerID="15e10260ef913b6b44e27ef0b7816cd144403f167a0779e8880ec7a69901a07c" exitCode=0 Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.374730 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" event={"ID":"9b9c4aab-790c-4581-bfc2-ad1d7302c704","Type":"ContainerDied","Data":"15e10260ef913b6b44e27ef0b7816cd144403f167a0779e8880ec7a69901a07c"} Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.383406 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" event={"ID":"433ae711-459e-4627-83c1-0fecfe929c60","Type":"ContainerStarted","Data":"e1fad7219fde604ee1964cc2b115acc62f018b650d6a77feec226a4b418a2a60"} Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.384756 5010 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.399530 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" event={"ID":"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899","Type":"ContainerStarted","Data":"2f06a939b0376260061f39514a9ddf81f12b6b0eba4c4244aad7cf2ba24e07a8"} Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.399571 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" event={"ID":"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899","Type":"ContainerStarted","Data":"cbd1c570c173ba7f69c1dd2787e702a3eaf115b6cfb85078992b81d0837d78ea"} Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.411653 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.411981 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjvqs\" (UniqueName: \"kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.412033 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.412089 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.412226 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:48.912194165 +0000 UTC m=+159.068170294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.423531 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" podStartSLOduration=133.423511167 podStartE2EDuration="2m13.423511167s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:48.419208574 +0000 UTC m=+158.575184713" watchObservedRunningTime="2026-02-03 10:04:48.423511167 +0000 UTC m=+158.579487296" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.511518 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.512450 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.513249 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.513392 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.513770 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjvqs\" (UniqueName: \"kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.513999 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.514305 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.515595 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.516423 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.517716 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:49.017703935 +0000 UTC m=+159.173680064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.527629 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.558324 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjvqs\" (UniqueName: \"kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs\") pod \"community-operators-f8ldc\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.615361 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.615711 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.615813 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rkwl\" (UniqueName: \"kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.615904 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.616125 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 10:04:49.116109722 +0000 UTC m=+159.272085851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.628633 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.698378 5010 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-03T10:04:48.384789856Z","Handler":null,"Name":""} Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.710747 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9nhlj"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.712231 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.717495 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.717763 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rkwl\" (UniqueName: \"kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.717867 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.718000 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.718104 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.718250 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: E0203 10:04:48.718393 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 10:04:49.21837892 +0000 UTC m=+159.374355049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-x857s" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.727131 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nhlj"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.779117 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rkwl\" (UniqueName: \"kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl\") pod \"certified-operators-rhsmk\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.798638 5010 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.798684 5010 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.820754 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.820930 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2wnb\" (UniqueName: \"kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.821038 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.821062 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.834489 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.851352 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.897365 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dgktg"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.902161 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.911638 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgktg"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.921877 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.921955 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.921982 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.922025 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2wnb\" (UniqueName: \"kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.922773 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.923062 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.944063 5010 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.944107 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.944408 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f8ldc"] Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.949114 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2wnb\" (UniqueName: \"kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb\") pod \"community-operators-9nhlj\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:48 crc kubenswrapper[5010]: I0203 10:04:48.996476 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-x857s\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.022899 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.022967 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmkxt\" (UniqueName: \"kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.023042 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.027113 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.124753 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.124840 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmkxt\" (UniqueName: \"kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.124946 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.125324 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.125373 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.146203 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"] Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.149937 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmkxt\" (UniqueName: \"kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt\") pod \"certified-operators-dgktg\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: W0203 10:04:49.150728 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b321403_09c3_4199_98ce_474deeea9d18.slice/crio-63d8474bfb4a1a954341a0c6e3ac0ed4a51edc38981d0b3fd911b0c631516f52 WatchSource:0}: Error finding container 63d8474bfb4a1a954341a0c6e3ac0ed4a51edc38981d0b3fd911b0c631516f52: Status 404 returned error can't find the container with id 63d8474bfb4a1a954341a0c6e3ac0ed4a51edc38981d0b3fd911b0c631516f52 Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.203065 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.204039 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:49 crc 
kubenswrapper[5010]: I0203 10:04:49.207309 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-whpdl" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.229406 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.258649 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nhlj"] Feb 03 10:04:49 crc kubenswrapper[5010]: W0203 10:04:49.269976 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7d7a138_50ca_4706_b760_2fe5154b2796.slice/crio-1b0c23388be323142da658c9f60348ab9cd0cc51111e7de9f4e1bb46c8a6bc8a WatchSource:0}: Error finding container 1b0c23388be323142da658c9f60348ab9cd0cc51111e7de9f4e1bb46c8a6bc8a: Status 404 returned error can't find the container with id 1b0c23388be323142da658c9f60348ab9cd0cc51111e7de9f4e1bb46c8a6bc8a Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.270933 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.415879 5010 generic.go:334] "Generic (PLEG): container finished" podID="5a09b802-00fe-4ff8-983e-58c495061478" containerID="fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908" exitCode=0 Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.416089 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerDied","Data":"fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.416266 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerStarted","Data":"9b3e23c6c17315ac65a0626a6f5dc6fcfc45753c23f65c38f8420f31fc344706"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.419831 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.423066 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" event={"ID":"b5475bfb-c3f0-4d16-a9ab-6bfa72f8f899","Type":"ContainerStarted","Data":"fc1be3f0c60688bf688144cf6e3149397c5618238d9ca0779dad8f429552e5d8"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.424854 5010 generic.go:334] "Generic (PLEG): container finished" podID="6b321403-09c3-4199-98ce-474deeea9d18" containerID="bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47" exitCode=0 Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.424917 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerDied","Data":"bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.424947 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" 
event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerStarted","Data":"63d8474bfb4a1a954341a0c6e3ac0ed4a51edc38981d0b3fd911b0c631516f52"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.427389 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerStarted","Data":"1b0c23388be323142da658c9f60348ab9cd0cc51111e7de9f4e1bb46c8a6bc8a"} Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.488233 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-f9lhg" podStartSLOduration=10.488197197 podStartE2EDuration="10.488197197s" podCreationTimestamp="2026-02-03 10:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:49.46299746 +0000 UTC m=+159.618973599" watchObservedRunningTime="2026-02-03 10:04:49.488197197 +0000 UTC m=+159.644173326" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.541189 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgktg"] Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.589446 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"] Feb 03 10:04:49 crc kubenswrapper[5010]: W0203 10:04:49.594113 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod594e9304_c63f_4d73_bcad_5258c1ebdd6d.slice/crio-4d0c21608e47f2a5fbe71a063022d5430ee94df368929ef6f0cd30bef83d5cd9 WatchSource:0}: Error finding container 4d0c21608e47f2a5fbe71a063022d5430ee94df368929ef6f0cd30bef83d5cd9: Status 404 returned error can't find the container with id 4d0c21608e47f2a5fbe71a063022d5430ee94df368929ef6f0cd30bef83d5cd9 Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.765649 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.845148 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume\") pod \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.845335 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfsz9\" (UniqueName: \"kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9\") pod \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.845364 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") pod \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\" (UID: \"9b9c4aab-790c-4581-bfc2-ad1d7302c704\") " Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.846071 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b9c4aab-790c-4581-bfc2-ad1d7302c704" (UID: "9b9c4aab-790c-4581-bfc2-ad1d7302c704"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.851049 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9" (OuterVolumeSpecName: "kube-api-access-qfsz9") pod "9b9c4aab-790c-4581-bfc2-ad1d7302c704" (UID: "9b9c4aab-790c-4581-bfc2-ad1d7302c704"). InnerVolumeSpecName "kube-api-access-qfsz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.851245 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b9c4aab-790c-4581-bfc2-ad1d7302c704" (UID: "9b9c4aab-790c-4581-bfc2-ad1d7302c704"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.946916 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b9c4aab-790c-4581-bfc2-ad1d7302c704-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.946954 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfsz9\" (UniqueName: \"kubernetes.io/projected/9b9c4aab-790c-4581-bfc2-ad1d7302c704-kube-api-access-qfsz9\") on node \"crc\" DevicePath \"\"" Feb 03 10:04:49 crc kubenswrapper[5010]: I0203 10:04:49.946964 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b9c4aab-790c-4581-bfc2-ad1d7302c704-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.432050 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" event={"ID":"594e9304-c63f-4d73-bcad-5258c1ebdd6d","Type":"ContainerStarted","Data":"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.432096 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" event={"ID":"594e9304-c63f-4d73-bcad-5258c1ebdd6d","Type":"ContainerStarted","Data":"4d0c21608e47f2a5fbe71a063022d5430ee94df368929ef6f0cd30bef83d5cd9"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.432145 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.433385 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" event={"ID":"9b9c4aab-790c-4581-bfc2-ad1d7302c704","Type":"ContainerDied","Data":"68feaa08ed8d91769630ca032dc73a0d3797e1b08b8b7690cc25c9c07a16da2d"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.433426 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68feaa08ed8d91769630ca032dc73a0d3797e1b08b8b7690cc25c9c07a16da2d" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.433396 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.448809 5010 generic.go:334] "Generic (PLEG): container finished" podID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerID="6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf" exitCode=0 Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.448911 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerDied","Data":"6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.450776 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerID="3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f" exitCode=0 Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.450871 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerDied","Data":"3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.450905 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerStarted","Data":"f8067043c468ce02991a947f5558cbe6d87a64ec40b08e86c4e947e44eed14bc"} Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.480472 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" podStartSLOduration=135.480451988 podStartE2EDuration="2m15.480451988s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:50.468688633 +0000 UTC m=+160.624664762" watchObservedRunningTime="2026-02-03 10:04:50.480451988 +0000 UTC m=+160.636428117" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.516401 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.520956 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:04:50 crc kubenswrapper[5010]: E0203 10:04:50.521203 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9c4aab-790c-4581-bfc2-ad1d7302c704" containerName="collect-profiles" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.521235 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9c4aab-790c-4581-bfc2-ad1d7302c704" containerName="collect-profiles" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.521363 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9c4aab-790c-4581-bfc2-ad1d7302c704" containerName="collect-profiles" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.522429 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.529669 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.552393 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.658049 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw58w\" (UniqueName: \"kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.658118 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.658149 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.760693 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.760817 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw58w\" (UniqueName: \"kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.760867 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.761311 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.761345 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities\") pod \"redhat-marketplace-w967c\" (UID: 
\"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.802308 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw58w\" (UniqueName: \"kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w\") pod \"redhat-marketplace-w967c\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.848992 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.900089 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"] Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.901066 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:50 crc kubenswrapper[5010]: I0203 10:04:50.915682 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.070609 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.070669 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgmtk\" (UniqueName: \"kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.070689 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.172148 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.172765 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgmtk\" (UniqueName: \"kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.172849 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities\") pod \"redhat-marketplace-rp7rd\" (UID: 
\"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.172923 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.175366 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.196553 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgmtk\" (UniqueName: \"kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk\") pod \"redhat-marketplace-rp7rd\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") " pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.236427 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.253194 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:04:51 crc kubenswrapper[5010]: W0203 10:04:51.260143 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod778b346c_f503_4364_9757_98c213d89edc.slice/crio-ccc904854d56565749138df195a8c2b29f6946a5393227b9fe1b124f630fe4e6 WatchSource:0}: Error finding container ccc904854d56565749138df195a8c2b29f6946a5393227b9fe1b124f630fe4e6: Status 404 returned error can't find the container with id ccc904854d56565749138df195a8c2b29f6946a5393227b9fe1b124f630fe4e6 Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.297445 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cp6s5" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.421403 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.422970 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.427610 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.427779 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.432543 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.463896 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerStarted","Data":"ccc904854d56565749138df195a8c2b29f6946a5393227b9fe1b124f630fe4e6"} Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.500329 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.504976 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.511493 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.529184 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.578544 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.578599 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.578630 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.578697 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndvzg\" (UniqueName: \"kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.578764 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.580689 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.580717 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.594996 5010 patch_prober.go:28] interesting pod/console-f9d7485db-wtcpj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.595052 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wtcpj" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.680420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.680608 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndvzg\" (UniqueName: \"kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.681592 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.681664 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.681725 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.682098 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.682183 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.682556 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.712424 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndvzg\" (UniqueName: \"kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg\") pod \"redhat-operators-5pgxf\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.721320 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.738552 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"] Feb 03 10:04:51 crc kubenswrapper[5010]: W0203 10:04:51.747121 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49f8db32_0c68_4c72_9aad_a02ce0c958aa.slice/crio-5fb8735def162698d86190ccce3a51a4ca66746325003df2b81d78c40f569048 WatchSource:0}: Error finding container 5fb8735def162698d86190ccce3a51a4ca66746325003df2b81d78c40f569048: Status 404 returned error can't find the container with id 5fb8735def162698d86190ccce3a51a4ca66746325003df2b81d78c40f569048 Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.773560 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.833614 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.917444 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.919176 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.924645 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"] Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.991735 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.992277 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmj7d\" (UniqueName: \"kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:51 crc kubenswrapper[5010]: I0203 10:04:51.992357 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.090786 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.090841 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.091048 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.091094 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.100428 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.100628 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmj7d\" (UniqueName: \"kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d\") pod \"redhat-operators-vqqgt\" 
(UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.100726 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.101711 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.102974 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.137263 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmj7d\" (UniqueName: \"kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d\") pod \"redhat-operators-vqqgt\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.202278 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 10:04:52 crc kubenswrapper[5010]: W0203 10:04:52.263368 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4cbcc5a5_e7ab_4f45_932e_2a75b44a8918.slice/crio-c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02 WatchSource:0}: Error finding container c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02: Status 404 returned error can't find the container with id c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02 Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.273890 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.325690 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.497588 5010 generic.go:334] "Generic (PLEG): container finished" podID="778b346c-f503-4364-9757-98c213d89edc" containerID="c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39" exitCode=0 Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.497673 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerDied","Data":"c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39"} Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.556554 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918","Type":"ContainerStarted","Data":"c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02"} Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.567882 5010 generic.go:334] "Generic (PLEG): container finished" podID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerID="e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee" exitCode=0 Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.567930 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerDied","Data":"e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee"} Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.567955 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerStarted","Data":"5fb8735def162698d86190ccce3a51a4ca66746325003df2b81d78c40f569048"} Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.590334 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.590393 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.608576 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.608611 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:04:52 crc kubenswrapper[5010]: I0203 10:04:52.771475 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"] Feb 03 10:04:52 crc kubenswrapper[5010]: W0203 10:04:52.797270 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcb492ad_594e_4460_8a8b_3476a4a0ddfe.slice/crio-b03e103076d38aa5bbbd68150acf3238a80f5aa11d029cd0429d26318865532f WatchSource:0}: Error finding container b03e103076d38aa5bbbd68150acf3238a80f5aa11d029cd0429d26318865532f: Status 404 returned error can't find the container with id b03e103076d38aa5bbbd68150acf3238a80f5aa11d029cd0429d26318865532f Feb 
03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.615651 5010 generic.go:334] "Generic (PLEG): container finished" podID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerID="fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1" exitCode=0 Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.615718 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerDied","Data":"fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1"} Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.616026 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerStarted","Data":"3ee4a0547eec3952db79e960939ddf437d022a2d426d7a0f64071f60145150ba"} Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.645025 5010 generic.go:334] "Generic (PLEG): container finished" podID="4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" containerID="b6ce30260b0537e23c72d3fbda2480ff591908c7f4893374556eb30d66802455" exitCode=0 Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.645121 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918","Type":"ContainerDied","Data":"b6ce30260b0537e23c72d3fbda2480ff591908c7f4893374556eb30d66802455"} Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.649365 5010 generic.go:334] "Generic (PLEG): container finished" podID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerID="e368cf1e860ceec201b26f8820d913ac5d90d18137dd55d145c59832181c9831" exitCode=0 Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.649699 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerDied","Data":"e368cf1e860ceec201b26f8820d913ac5d90d18137dd55d145c59832181c9831"} Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.649761 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerStarted","Data":"b03e103076d38aa5bbbd68150acf3238a80f5aa11d029cd0429d26318865532f"} Feb 03 10:04:53 crc kubenswrapper[5010]: I0203 10:04:53.665645 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-snrzp" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.039415 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.118848 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.119866 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 10:04:55 crc kubenswrapper[5010]: E0203 10:04:55.120269 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" containerName="pruner" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.120287 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" containerName="pruner" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.120379 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" containerName="pruner" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.120931 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.123085 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.127056 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.129665 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.203039 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access\") pod \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.203280 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir\") pod \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\" (UID: \"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918\") " Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.203932 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" (UID: "4cbcc5a5-e7ab-4f45-932e-2a75b44a8918"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.215882 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4cbcc5a5-e7ab-4f45-932e-2a75b44a8918" (UID: "4cbcc5a5-e7ab-4f45-932e-2a75b44a8918"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.312982 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.313168 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.313647 5010 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.313721 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cbcc5a5-e7ab-4f45-932e-2a75b44a8918-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.415453 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.415525 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.415620 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.454394 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.465224 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.715754 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4cbcc5a5-e7ab-4f45-932e-2a75b44a8918","Type":"ContainerDied","Data":"c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02"} Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.716080 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c707b98492191932d0175e25d4e25f2fb1048f7ce0a1e4416bc5a04063fd6c02" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.715827 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 10:04:55 crc kubenswrapper[5010]: I0203 10:04:55.870579 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 10:04:55 crc kubenswrapper[5010]: W0203 10:04:55.906631 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb5f00703_7e5f_4c7b_85fe_ce7fb07b7431.slice/crio-2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b WatchSource:0}: Error finding container 2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b: Status 404 returned error can't find the container with id 2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b Feb 03 10:04:56 crc kubenswrapper[5010]: I0203 10:04:56.730678 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431","Type":"ContainerStarted","Data":"2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b"} Feb 03 10:04:57 crc kubenswrapper[5010]: I0203 10:04:57.742301 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431","Type":"ContainerStarted","Data":"c9571ee18245dbd51cc88b9c5049e37b6b83a29ee3997cd7bbd419274e1211f3"} Feb 03 10:04:57 crc kubenswrapper[5010]: I0203 10:04:57.762904 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.7628874530000003 podStartE2EDuration="2.762887453s" podCreationTimestamp="2026-02-03 10:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:04:57.761409591 +0000 UTC m=+167.917385730" watchObservedRunningTime="2026-02-03 10:04:57.762887453 +0000 UTC m=+167.918863582" Feb 03 10:04:57 crc kubenswrapper[5010]: I0203 10:04:57.770359 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:57 crc kubenswrapper[5010]: I0203 10:04:57.776351 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/081d0234-b506-49ff-81c9-c535f6e1c588-metrics-certs\") pod \"network-metrics-daemon-clvdz\" (UID: \"081d0234-b506-49ff-81c9-c535f6e1c588\") " pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:57 crc kubenswrapper[5010]: 
I0203 10:04:57.793571 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-m4jjq" Feb 03 10:04:58 crc kubenswrapper[5010]: I0203 10:04:58.017880 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clvdz" Feb 03 10:04:58 crc kubenswrapper[5010]: I0203 10:04:58.718847 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-clvdz"] Feb 03 10:04:58 crc kubenswrapper[5010]: I0203 10:04:58.753362 5010 generic.go:334] "Generic (PLEG): container finished" podID="b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" containerID="c9571ee18245dbd51cc88b9c5049e37b6b83a29ee3997cd7bbd419274e1211f3" exitCode=0 Feb 03 10:04:58 crc kubenswrapper[5010]: I0203 10:04:58.753401 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431","Type":"ContainerDied","Data":"c9571ee18245dbd51cc88b9c5049e37b6b83a29ee3997cd7bbd419274e1211f3"} Feb 03 10:04:58 crc kubenswrapper[5010]: W0203 10:04:58.754895 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod081d0234_b506_49ff_81c9_c535f6e1c588.slice/crio-13f48e6ab387ab1d95442d03eb875ad51364e131f58502ed226acd326e53d72e WatchSource:0}: Error finding container 13f48e6ab387ab1d95442d03eb875ad51364e131f58502ed226acd326e53d72e: Status 404 returned error can't find the container with id 13f48e6ab387ab1d95442d03eb875ad51364e131f58502ed226acd326e53d72e Feb 03 10:04:59 crc kubenswrapper[5010]: I0203 10:04:59.764133 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-clvdz" event={"ID":"081d0234-b506-49ff-81c9-c535f6e1c588","Type":"ContainerStarted","Data":"ac9fdd2d1d1b165c1349b346bcc0d7a19010fb2fc0248e686441121ff3fe11b3"} Feb 03 10:04:59 crc kubenswrapper[5010]: I0203 10:04:59.764514 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-clvdz" event={"ID":"081d0234-b506-49ff-81c9-c535f6e1c588","Type":"ContainerStarted","Data":"13f48e6ab387ab1d95442d03eb875ad51364e131f58502ed226acd326e53d72e"} Feb 03 10:05:01 crc kubenswrapper[5010]: I0203 10:05:01.591010 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:05:01 crc kubenswrapper[5010]: I0203 10:05:01.596701 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:05:02 crc kubenswrapper[5010]: I0203 10:05:02.091193 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:05:02 crc kubenswrapper[5010]: I0203 10:05:02.091279 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:05:02 crc kubenswrapper[5010]: I0203 10:05:02.091621 5010 patch_prober.go:28] interesting pod/downloads-7954f5f757-jvtp4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 03 10:05:02 crc kubenswrapper[5010]: I0203 10:05:02.091708 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jvtp4" podUID="d8101cd0-5430-4786-bf8a-3d9c60ad1f7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 03 10:05:07 crc kubenswrapper[5010]: I0203 10:05:07.538264 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 10:05:09 crc kubenswrapper[5010]: I0203 10:05:09.276541 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.070097 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.192637 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir\") pod \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.192752 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" (UID: "b5f00703-7e5f-4c7b-85fe-ce7fb07b7431"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.192797 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access\") pod \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\" (UID: \"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431\") " Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.193025 5010 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.199319 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" (UID: "b5f00703-7e5f-4c7b-85fe-ce7fb07b7431"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.294523 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f00703-7e5f-4c7b-85fe-ce7fb07b7431-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.830624 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b5f00703-7e5f-4c7b-85fe-ce7fb07b7431","Type":"ContainerDied","Data":"2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b"} Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.830671 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2204207b01823b33e27480642d42ad6ac24cd5512f2cf07c931779231850f28b" Feb 03 10:05:11 crc kubenswrapper[5010]: I0203 10:05:11.830741 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 10:05:12 crc kubenswrapper[5010]: I0203 10:05:12.099181 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jvtp4" Feb 03 10:05:16 crc kubenswrapper[5010]: I0203 10:05:16.390761 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:05:16 crc kubenswrapper[5010]: I0203 10:05:16.391132 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.118772 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.119518 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kmj7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vqqgt_openshift-marketplace(bcb492ad-594e-4460-8a8b-3476a4a0ddfe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.121189 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vqqgt" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.137945 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.138118 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndvzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5pgxf_openshift-marketplace(777b0b1e-96c3-4914-8b7b-d51186433cb7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:17 crc kubenswrapper[5010]: E0203 10:05:17.139935 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5pgxf" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" Feb 03 10:05:18 crc kubenswrapper[5010]: E0203 10:05:18.407551 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vqqgt" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" Feb 03 10:05:18 crc kubenswrapper[5010]: E0203 10:05:18.408952 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5pgxf" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" Feb 03 10:05:18 crc kubenswrapper[5010]: E0203 10:05:18.500267 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 03 10:05:18 crc kubenswrapper[5010]: E0203 10:05:18.500476 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmkxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dgktg_openshift-marketplace(16b28bac-b8da-4fa7-8282-3b97ef4decac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:18 crc kubenswrapper[5010]: E0203 10:05:18.501857 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dgktg" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" Feb 03 10:05:19 crc kubenswrapper[5010]: E0203 10:05:19.946464 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dgktg" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.016638 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.016815 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2wnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9nhlj_openshift-marketplace(e7d7a138-50ca-4706-b760-2fe5154b2796): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.018008 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9nhlj" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.029661 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.030459 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rkwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rhsmk_openshift-marketplace(6b321403-09c3-4199-98ce-474deeea9d18): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:20 crc kubenswrapper[5010]: E0203 10:05:20.031583 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rhsmk" podUID="6b321403-09c3-4199-98ce-474deeea9d18" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.160126 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9nhlj" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.160176 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rhsmk" podUID="6b321403-09c3-4199-98ce-474deeea9d18" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.239550 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.239721 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgmtk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rp7rd_openshift-marketplace(49f8db32-0c68-4c72-9aad-a02ce0c958aa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.241097 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rp7rd" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.281907 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.282026 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjvqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-f8ldc_openshift-marketplace(5a09b802-00fe-4ff8-983e-58c495061478): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.283332 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-f8ldc" podUID="5a09b802-00fe-4ff8-983e-58c495061478" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.295380 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.295473 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw58w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-w967c_openshift-marketplace(778b346c-f503-4364-9757-98c213d89edc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.296683 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-w967c" podUID="778b346c-f503-4364-9757-98c213d89edc" Feb 03 10:05:21 crc kubenswrapper[5010]: I0203 10:05:21.893491 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-clvdz" event={"ID":"081d0234-b506-49ff-81c9-c535f6e1c588","Type":"ContainerStarted","Data":"c28e6bed742dfead03b98be3eca12cec53662c93c11807c896c211e74fa98d69"} Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.896331 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rp7rd" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.896346 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-w967c" podUID="778b346c-f503-4364-9757-98c213d89edc" Feb 03 10:05:21 crc kubenswrapper[5010]: E0203 10:05:21.896557 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-f8ldc" podUID="5a09b802-00fe-4ff8-983e-58c495061478" Feb 03 10:05:21 crc kubenswrapper[5010]: I0203 10:05:21.940081 5010 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-clvdz" podStartSLOduration=166.940062777 podStartE2EDuration="2m46.940062777s" podCreationTimestamp="2026-02-03 10:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:05:21.936702501 +0000 UTC m=+192.092678630" watchObservedRunningTime="2026-02-03 10:05:21.940062777 +0000 UTC m=+192.096038926" Feb 03 10:05:22 crc kubenswrapper[5010]: I0203 10:05:22.489369 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pnt99" Feb 03 10:05:30 crc kubenswrapper[5010]: I0203 10:05:30.933010 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerStarted","Data":"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e"} Feb 03 10:05:31 crc kubenswrapper[5010]: I0203 10:05:31.938548 5010 generic.go:334] "Generic (PLEG): container finished" podID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerID="8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e" exitCode=0 Feb 03 10:05:31 crc kubenswrapper[5010]: I0203 10:05:31.938622 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerDied","Data":"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e"} Feb 03 10:05:31 crc kubenswrapper[5010]: I0203 10:05:31.940990 5010 generic.go:334] "Generic (PLEG): container finished" podID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerID="23d25d23b886bcc187c1b9cd3f31af42a2e9d0581c448b9f8d3e75f9a6276513" exitCode=0 Feb 03 10:05:31 crc kubenswrapper[5010]: I0203 10:05:31.941020 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerDied","Data":"23d25d23b886bcc187c1b9cd3f31af42a2e9d0581c448b9f8d3e75f9a6276513"} Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.700490 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 10:05:32 crc kubenswrapper[5010]: E0203 10:05:32.701048 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" containerName="pruner" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.701070 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" containerName="pruner" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.701190 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f00703-7e5f-4c7b-85fe-ce7fb07b7431" containerName="pruner" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.701645 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.703254 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.703652 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.709707 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.791998 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.792086 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.893318 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.893422 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.893448 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.912444 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.947519 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerStarted","Data":"7d30f3b060cc0d586383cb9de6a300c34ce671caf4098a60fda10d9a98201907"} Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.949327 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" 
event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerStarted","Data":"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b"} Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.966179 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vqqgt" podStartSLOduration=3.20027069 podStartE2EDuration="41.966161171s" podCreationTimestamp="2026-02-03 10:04:51 +0000 UTC" firstStartedPulling="2026-02-03 10:04:53.661466586 +0000 UTC m=+163.817442715" lastFinishedPulling="2026-02-03 10:05:32.427357067 +0000 UTC m=+202.583333196" observedRunningTime="2026-02-03 10:05:32.962572432 +0000 UTC m=+203.118548571" watchObservedRunningTime="2026-02-03 10:05:32.966161171 +0000 UTC m=+203.122137300" Feb 03 10:05:32 crc kubenswrapper[5010]: I0203 10:05:32.984539 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5pgxf" podStartSLOduration=3.185124851 podStartE2EDuration="41.984517984s" podCreationTimestamp="2026-02-03 10:04:51 +0000 UTC" firstStartedPulling="2026-02-03 10:04:53.619842473 +0000 UTC m=+163.775818592" lastFinishedPulling="2026-02-03 10:05:32.419235596 +0000 UTC m=+202.575211725" observedRunningTime="2026-02-03 10:05:32.979760336 +0000 UTC m=+203.135736485" watchObservedRunningTime="2026-02-03 10:05:32.984517984 +0000 UTC m=+203.140494123" Feb 03 10:05:33 crc kubenswrapper[5010]: I0203 10:05:33.025784 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:33 crc kubenswrapper[5010]: I0203 10:05:33.204046 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 10:05:33 crc kubenswrapper[5010]: I0203 10:05:33.957501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"81299ba1-c345-43b2-ac1b-78107f12ed8c","Type":"ContainerStarted","Data":"52d09727f2737181bd5292c49f0a0cb1d6b02cc9ba3925b005189292d769e5fd"} Feb 03 10:05:33 crc kubenswrapper[5010]: I0203 10:05:33.957838 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"81299ba1-c345-43b2-ac1b-78107f12ed8c","Type":"ContainerStarted","Data":"82dec6b7308cef3963c8d4acc2aea78d67df12d2c0c84d234d23c8d27a34b151"} Feb 03 10:05:34 crc kubenswrapper[5010]: I0203 10:05:34.521366 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.5213477319999997 podStartE2EDuration="2.521347732s" podCreationTimestamp="2026-02-03 10:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:05:33.973954557 +0000 UTC m=+204.129930686" watchObservedRunningTime="2026-02-03 10:05:34.521347732 +0000 UTC m=+204.677323861" Feb 03 10:05:34 crc kubenswrapper[5010]: I0203 10:05:34.965029 5010 generic.go:334] "Generic (PLEG): container finished" podID="81299ba1-c345-43b2-ac1b-78107f12ed8c" containerID="52d09727f2737181bd5292c49f0a0cb1d6b02cc9ba3925b005189292d769e5fd" exitCode=0 Feb 03 10:05:34 crc kubenswrapper[5010]: I0203 10:05:34.965080 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"81299ba1-c345-43b2-ac1b-78107f12ed8c","Type":"ContainerDied","Data":"52d09727f2737181bd5292c49f0a0cb1d6b02cc9ba3925b005189292d769e5fd"} Feb 03 10:05:35 crc kubenswrapper[5010]: I0203 10:05:35.973803 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerID="bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4" exitCode=0 Feb 03 10:05:35 crc kubenswrapper[5010]: I0203 10:05:35.973881 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerDied","Data":"bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4"} Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.225192 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.249080 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir\") pod \"81299ba1-c345-43b2-ac1b-78107f12ed8c\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.249361 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "81299ba1-c345-43b2-ac1b-78107f12ed8c" (UID: "81299ba1-c345-43b2-ac1b-78107f12ed8c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.249378 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access\") pod \"81299ba1-c345-43b2-ac1b-78107f12ed8c\" (UID: \"81299ba1-c345-43b2-ac1b-78107f12ed8c\") " Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.249636 5010 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81299ba1-c345-43b2-ac1b-78107f12ed8c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.253818 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "81299ba1-c345-43b2-ac1b-78107f12ed8c" (UID: "81299ba1-c345-43b2-ac1b-78107f12ed8c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.382388 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81299ba1-c345-43b2-ac1b-78107f12ed8c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.981294 5010 generic.go:334] "Generic (PLEG): container finished" podID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerID="730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a" exitCode=0 Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.981332 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerDied","Data":"730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a"} Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.984286 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerStarted","Data":"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff"} Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.985856 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"81299ba1-c345-43b2-ac1b-78107f12ed8c","Type":"ContainerDied","Data":"82dec6b7308cef3963c8d4acc2aea78d67df12d2c0c84d234d23c8d27a34b151"} Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.985881 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82dec6b7308cef3963c8d4acc2aea78d67df12d2c0c84d234d23c8d27a34b151" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.985908 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.999605 5010 generic.go:334] "Generic (PLEG): container finished" podID="6b321403-09c3-4199-98ce-474deeea9d18" containerID="ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e" exitCode=0 Feb 03 10:05:36 crc kubenswrapper[5010]: I0203 10:05:36.999754 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerDied","Data":"ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e"} Feb 03 10:05:37 crc kubenswrapper[5010]: I0203 10:05:37.005060 5010 generic.go:334] "Generic (PLEG): container finished" podID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerID="fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7" exitCode=0 Feb 03 10:05:37 crc kubenswrapper[5010]: I0203 10:05:37.005153 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerDied","Data":"fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7"} Feb 03 10:05:37 crc kubenswrapper[5010]: I0203 10:05:37.028602 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dgktg" podStartSLOduration=2.917250564 podStartE2EDuration="49.028583744s" podCreationTimestamp="2026-02-03 10:04:48 +0000 UTC" firstStartedPulling="2026-02-03 10:04:50.453560343 +0000 UTC m=+160.609536462" lastFinishedPulling="2026-02-03 10:05:36.564893513 +0000 UTC m=+206.720869642" observedRunningTime="2026-02-03 10:05:37.027134828 +0000 UTC m=+207.183110957" watchObservedRunningTime="2026-02-03 10:05:37.028583744 +0000 UTC m=+207.184559883" Feb 03 10:05:39 crc kubenswrapper[5010]: I0203 10:05:39.015947 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerStarted","Data":"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"} Feb 03 10:05:39 crc kubenswrapper[5010]: I0203 10:05:39.034257 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rp7rd" podStartSLOduration=3.177812653 podStartE2EDuration="49.034240989s" podCreationTimestamp="2026-02-03 10:04:50 +0000 UTC" firstStartedPulling="2026-02-03 10:04:52.573074852 +0000 UTC m=+162.729050981" lastFinishedPulling="2026-02-03 10:05:38.429503188 +0000 UTC m=+208.585479317" observedRunningTime="2026-02-03 10:05:39.032342902 +0000 UTC m=+209.188319041" watchObservedRunningTime="2026-02-03 10:05:39.034240989 +0000 UTC m=+209.190217118" Feb 03 10:05:39 crc kubenswrapper[5010]: I0203 10:05:39.230241 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:39 crc kubenswrapper[5010]: I0203 10:05:39.230635 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:39 crc kubenswrapper[5010]: I0203 10:05:39.982369 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.101036 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 
03 10:05:40 crc kubenswrapper[5010]: E0203 10:05:40.101241 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81299ba1-c345-43b2-ac1b-78107f12ed8c" containerName="pruner" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.101252 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="81299ba1-c345-43b2-ac1b-78107f12ed8c" containerName="pruner" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.101361 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="81299ba1-c345-43b2-ac1b-78107f12ed8c" containerName="pruner" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.101691 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.103876 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.103931 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.116744 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.232359 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.232419 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.232461 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.333419 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.333487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.333529 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.333573 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.333649 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.355348 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access\") pod \"installer-9-crc\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.417923 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:05:40 crc kubenswrapper[5010]: I0203 10:05:40.659560 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.037317 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7c4b0e53-f63d-4ccf-a718-389b959a66c4","Type":"ContainerStarted","Data":"47e2fb47d49372688a6df246f47c04ec60321886600acbad24a608754f55694c"} Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.040425 5010 generic.go:334] "Generic (PLEG): container finished" podID="778b346c-f503-4364-9757-98c213d89edc" containerID="699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd" exitCode=0 Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.040477 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerDied","Data":"699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd"} Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.045626 5010 generic.go:334] "Generic (PLEG): container finished" podID="5a09b802-00fe-4ff8-983e-58c495061478" containerID="f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50" exitCode=0 Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.045692 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerDied","Data":"f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50"} Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.048647 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerStarted","Data":"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f"} Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.050928 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" 
event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerStarted","Data":"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab"} Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.100696 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9nhlj" podStartSLOduration=3.7183660659999997 podStartE2EDuration="53.100682045s" podCreationTimestamp="2026-02-03 10:04:48 +0000 UTC" firstStartedPulling="2026-02-03 10:04:50.449967471 +0000 UTC m=+160.605943590" lastFinishedPulling="2026-02-03 10:05:39.83228344 +0000 UTC m=+209.988259569" observedRunningTime="2026-02-03 10:05:41.098306877 +0000 UTC m=+211.254283016" watchObservedRunningTime="2026-02-03 10:05:41.100682045 +0000 UTC m=+211.256658174" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.116629 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rhsmk" podStartSLOduration=2.590496173 podStartE2EDuration="53.116608828s" podCreationTimestamp="2026-02-03 10:04:48 +0000 UTC" firstStartedPulling="2026-02-03 10:04:49.426707229 +0000 UTC m=+159.582683358" lastFinishedPulling="2026-02-03 10:05:39.952819884 +0000 UTC m=+210.108796013" observedRunningTime="2026-02-03 10:05:41.114171598 +0000 UTC m=+211.270147727" watchObservedRunningTime="2026-02-03 10:05:41.116608828 +0000 UTC m=+211.272584957" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.254315 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.254365 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.291402 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.835139 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.835425 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:05:41 crc kubenswrapper[5010]: I0203 10:05:41.890880 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.057677 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerStarted","Data":"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08"} Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.059183 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7c4b0e53-f63d-4ccf-a718-389b959a66c4","Type":"ContainerStarted","Data":"8235871772bfab300d8b3a5a6ad3309af90a9d4729dea3e53a02ffdbbd8677af"} Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.075111 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w967c" podStartSLOduration=3.140853491 podStartE2EDuration="52.075094106s" podCreationTimestamp="2026-02-03 10:04:50 +0000 UTC" firstStartedPulling="2026-02-03 10:04:52.512044927 
+0000 UTC m=+162.668021046" lastFinishedPulling="2026-02-03 10:05:41.446285532 +0000 UTC m=+211.602261661" observedRunningTime="2026-02-03 10:05:42.073737343 +0000 UTC m=+212.229713472" watchObservedRunningTime="2026-02-03 10:05:42.075094106 +0000 UTC m=+212.231070235" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.110448 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.132409 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.13239129 podStartE2EDuration="2.13239129s" podCreationTimestamp="2026-02-03 10:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:05:42.097198162 +0000 UTC m=+212.253174291" watchObservedRunningTime="2026-02-03 10:05:42.13239129 +0000 UTC m=+212.288367419" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.275126 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.275188 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:05:42 crc kubenswrapper[5010]: I0203 10:05:42.320313 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:05:43 crc kubenswrapper[5010]: I0203 10:05:43.118069 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:05:44 crc kubenswrapper[5010]: I0203 10:05:44.069944 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerStarted","Data":"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac"} Feb 03 10:05:45 crc kubenswrapper[5010]: I0203 10:05:45.928245 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f8ldc" podStartSLOduration=3.746774883 podStartE2EDuration="57.928225135s" podCreationTimestamp="2026-02-03 10:04:48 +0000 UTC" firstStartedPulling="2026-02-03 10:04:49.419089892 +0000 UTC m=+159.575066021" lastFinishedPulling="2026-02-03 10:05:43.600540124 +0000 UTC m=+213.756516273" observedRunningTime="2026-02-03 10:05:44.089657103 +0000 UTC m=+214.245633262" watchObservedRunningTime="2026-02-03 10:05:45.928225135 +0000 UTC m=+216.084201274" Feb 03 10:05:45 crc kubenswrapper[5010]: I0203 10:05:45.929618 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"] Feb 03 10:05:45 crc kubenswrapper[5010]: I0203 10:05:45.929849 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vqqgt" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="registry-server" containerID="cri-o://7d30f3b060cc0d586383cb9de6a300c34ce671caf4098a60fda10d9a98201907" gracePeriod=2 Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.085036 5010 generic.go:334] "Generic (PLEG): container finished" podID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerID="7d30f3b060cc0d586383cb9de6a300c34ce671caf4098a60fda10d9a98201907" exitCode=0 Feb 03 10:05:46 crc 
kubenswrapper[5010]: I0203 10:05:46.085077 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerDied","Data":"7d30f3b060cc0d586383cb9de6a300c34ce671caf4098a60fda10d9a98201907"} Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.266256 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.310852 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmj7d\" (UniqueName: \"kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d\") pod \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.310999 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities\") pod \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.311053 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content\") pod \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\" (UID: \"bcb492ad-594e-4460-8a8b-3476a4a0ddfe\") " Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.312445 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities" (OuterVolumeSpecName: "utilities") pod "bcb492ad-594e-4460-8a8b-3476a4a0ddfe" (UID: "bcb492ad-594e-4460-8a8b-3476a4a0ddfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.317704 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d" (OuterVolumeSpecName: "kube-api-access-kmj7d") pod "bcb492ad-594e-4460-8a8b-3476a4a0ddfe" (UID: "bcb492ad-594e-4460-8a8b-3476a4a0ddfe"). InnerVolumeSpecName "kube-api-access-kmj7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.390410 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.390472 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.390521 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.391054 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.391188 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb" gracePeriod=600 Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.413154 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.413187 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmj7d\" (UniqueName: \"kubernetes.io/projected/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-kube-api-access-kmj7d\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.434737 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcb492ad-594e-4460-8a8b-3476a4a0ddfe" (UID: "bcb492ad-594e-4460-8a8b-3476a4a0ddfe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:05:46 crc kubenswrapper[5010]: I0203 10:05:46.514569 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcb492ad-594e-4460-8a8b-3476a4a0ddfe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.096482 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb" exitCode=0 Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.096567 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb"} Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.100439 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vqqgt" event={"ID":"bcb492ad-594e-4460-8a8b-3476a4a0ddfe","Type":"ContainerDied","Data":"b03e103076d38aa5bbbd68150acf3238a80f5aa11d029cd0429d26318865532f"} Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.100514 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vqqgt" Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.100520 5010 scope.go:117] "RemoveContainer" containerID="7d30f3b060cc0d586383cb9de6a300c34ce671caf4098a60fda10d9a98201907" Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.133466 5010 scope.go:117] "RemoveContainer" containerID="23d25d23b886bcc187c1b9cd3f31af42a2e9d0581c448b9f8d3e75f9a6276513" Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.140227 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"] Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.143437 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vqqgt"] Feb 03 10:05:47 crc kubenswrapper[5010]: I0203 10:05:47.149433 5010 scope.go:117] "RemoveContainer" containerID="e368cf1e860ceec201b26f8820d913ac5d90d18137dd55d145c59832181c9831" Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.109458 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4"} Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.512420 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" path="/var/lib/kubelet/pods/bcb492ad-594e-4460-8a8b-3476a4a0ddfe/volumes" Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.630119 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.630238 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.677295 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.835791 5010 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.835859 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:05:48 crc kubenswrapper[5010]: I0203 10:05:48.932392 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.028137 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.028190 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.075787 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.154699 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.157461 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.161698 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:05:49 crc kubenswrapper[5010]: I0203 10:05:49.274469 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:50 crc kubenswrapper[5010]: I0203 10:05:50.735956 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9nhlj"] Feb 03 10:05:50 crc kubenswrapper[5010]: I0203 10:05:50.849424 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:05:50 crc kubenswrapper[5010]: I0203 10:05:50.850157 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:05:50 crc kubenswrapper[5010]: I0203 10:05:50.916933 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.133377 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9nhlj" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="registry-server" containerID="cri-o://179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab" gracePeriod=2 Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.190558 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.297662 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rp7rd" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.328853 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgktg"] Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.329062 5010 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dgktg" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="registry-server" containerID="cri-o://fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff" gracePeriod=2 Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.494288 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9nhlj" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.583650 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities\") pod \"e7d7a138-50ca-4706-b760-2fe5154b2796\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.583722 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content\") pod \"e7d7a138-50ca-4706-b760-2fe5154b2796\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.583787 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2wnb\" (UniqueName: \"kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb\") pod \"e7d7a138-50ca-4706-b760-2fe5154b2796\" (UID: \"e7d7a138-50ca-4706-b760-2fe5154b2796\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.585289 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities" (OuterVolumeSpecName: "utilities") pod "e7d7a138-50ca-4706-b760-2fe5154b2796" (UID: "e7d7a138-50ca-4706-b760-2fe5154b2796"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.592397 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb" (OuterVolumeSpecName: "kube-api-access-d2wnb") pod "e7d7a138-50ca-4706-b760-2fe5154b2796" (UID: "e7d7a138-50ca-4706-b760-2fe5154b2796"). InnerVolumeSpecName "kube-api-access-d2wnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.635801 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.635986 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7d7a138-50ca-4706-b760-2fe5154b2796" (UID: "e7d7a138-50ca-4706-b760-2fe5154b2796"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685259 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmkxt\" (UniqueName: \"kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt\") pod \"16b28bac-b8da-4fa7-8282-3b97ef4decac\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685531 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities\") pod \"16b28bac-b8da-4fa7-8282-3b97ef4decac\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685570 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content\") pod \"16b28bac-b8da-4fa7-8282-3b97ef4decac\" (UID: \"16b28bac-b8da-4fa7-8282-3b97ef4decac\") " Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685805 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685817 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d7a138-50ca-4706-b760-2fe5154b2796-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.685828 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2wnb\" (UniqueName: \"kubernetes.io/projected/e7d7a138-50ca-4706-b760-2fe5154b2796-kube-api-access-d2wnb\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.686396 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities" (OuterVolumeSpecName: "utilities") pod "16b28bac-b8da-4fa7-8282-3b97ef4decac" (UID: "16b28bac-b8da-4fa7-8282-3b97ef4decac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.689631 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt" (OuterVolumeSpecName: "kube-api-access-jmkxt") pod "16b28bac-b8da-4fa7-8282-3b97ef4decac" (UID: "16b28bac-b8da-4fa7-8282-3b97ef4decac"). InnerVolumeSpecName "kube-api-access-jmkxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.730426 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16b28bac-b8da-4fa7-8282-3b97ef4decac" (UID: "16b28bac-b8da-4fa7-8282-3b97ef4decac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.786934 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmkxt\" (UniqueName: \"kubernetes.io/projected/16b28bac-b8da-4fa7-8282-3b97ef4decac-kube-api-access-jmkxt\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.786987 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:51 crc kubenswrapper[5010]: I0203 10:05:51.787008 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16b28bac-b8da-4fa7-8282-3b97ef4decac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.139411 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerID="fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff" exitCode=0 Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.139487 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerDied","Data":"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff"} Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.139495 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgktg" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.139514 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgktg" event={"ID":"16b28bac-b8da-4fa7-8282-3b97ef4decac","Type":"ContainerDied","Data":"f8067043c468ce02991a947f5558cbe6d87a64ec40b08e86c4e947e44eed14bc"} Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.139529 5010 scope.go:117] "RemoveContainer" containerID="fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff" Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.145549 5010 generic.go:334] "Generic (PLEG): container finished" podID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerID="179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab" exitCode=0 Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.145590 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerDied","Data":"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab"} Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.145629 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nhlj" event={"ID":"e7d7a138-50ca-4706-b760-2fe5154b2796","Type":"ContainerDied","Data":"1b0c23388be323142da658c9f60348ab9cd0cc51111e7de9f4e1bb46c8a6bc8a"} Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.145568 5010 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.145568 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9nhlj"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.157716 5010 scope.go:117] "RemoveContainer" containerID="bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.180147 5010 scope.go:117] "RemoveContainer" containerID="3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.181631 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgktg"]
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.188479 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dgktg"]
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.197311 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9nhlj"]
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.203336 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9nhlj"]
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.213097 5010 scope.go:117] "RemoveContainer" containerID="fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff"
Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.214747 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff\": container with ID starting with fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff not found: ID does not exist" containerID="fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.214802 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff"} err="failed to get container status \"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff\": rpc error: code = NotFound desc = could not find container \"fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff\": container with ID starting with fde54f8285f3a8bdecb3c2fb970c15c3d672ab7757cd44de9366dd799bc0cfff not found: ID does not exist"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.214835 5010 scope.go:117] "RemoveContainer" containerID="bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4"
Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.215770 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4\": container with ID starting with bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4 not found: ID does not exist" containerID="bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.215826 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4"} err="failed to get container status \"bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4\": rpc error: code = NotFound desc = could not find container \"bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4\": container with ID starting with bcc654dbe8169a28cffacbe314417d4a4611832d125b611e388eb693549fa2c4 not found: ID does not exist"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.215855 5010 scope.go:117] "RemoveContainer" containerID="3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f"
Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.216300 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f\": container with ID starting with 3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f not found: ID does not exist" containerID="3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.216344 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f"} err="failed to get container status \"3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f\": rpc error: code = NotFound desc = could not find container \"3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f\": container with ID starting with 3a76abe4c5364f44f09a54270bc240290cf286a9884d39d2982b2da16ddcac0f not found: ID does not exist"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.216373 5010 scope.go:117] "RemoveContainer" containerID="179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.227286 5010 scope.go:117] "RemoveContainer" containerID="730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.240453 5010 scope.go:117] "RemoveContainer" containerID="6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.270053 5010 scope.go:117] "RemoveContainer" containerID="179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab"
Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.270438 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab\": container with ID starting with 179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab not found: ID does not exist" containerID="179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.270473 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab"} err="failed to get container status \"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab\": rpc error: code = NotFound desc = could not find container \"179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab\": container with ID starting with 179680fa76d28d0014bffe9d7d1991e888e4df35ecde3cc94412f4ec3db320ab not found: ID does not exist"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.270495 5010 scope.go:117] "RemoveContainer" containerID="730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a"
Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.270723 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16b28bac_b8da_4fa7_8282_3b97ef4decac.slice/crio-f8067043c468ce02991a947f5558cbe6d87a64ec40b08e86c4e947e44eed14bc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7d7a138_50ca_4706_b760_2fe5154b2796.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16b28bac_b8da_4fa7_8282_3b97ef4decac.slice\": RecentStats: unable to find data in memory cache]"
Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.270755 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a\": container with ID starting with 730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a not found: ID does not exist" containerID="730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.270778 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a"} err="failed to get container status \"730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a\": rpc error: code = NotFound desc = could not find container \"730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a\": container with ID starting with 730f222e342318bae796254f04e4df63b050039401e8b81d0b3edfa6109b624a not found: ID does not exist"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.270792 5010 scope.go:117] "RemoveContainer" containerID="6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf"
Feb 03 10:05:52 crc kubenswrapper[5010]: E0203 10:05:52.271008 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf\": container with ID starting with 6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf not found: ID does not exist" containerID="6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.271031 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf"} err="failed to get container status \"6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf\": rpc error: code = NotFound desc = could not find container \"6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf\": container with ID starting with 6c34e521910561d744489bcc04d63bb60f01ae814df1e11ab8b27bfb522f2dcf not found: ID does not exist"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.511728 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" path="/var/lib/kubelet/pods/16b28bac-b8da-4fa7-8282-3b97ef4decac/volumes"
Feb 03 10:05:52 crc kubenswrapper[5010]: I0203 10:05:52.512948 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" path="/var/lib/kubelet/pods/e7d7a138-50ca-4706-b760-2fe5154b2796/volumes"
Feb 03 10:05:53 crc kubenswrapper[5010]: I0203 10:05:53.736351 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"]
Feb 03 10:05:53 crc kubenswrapper[5010]: I0203 10:05:53.736687 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rp7rd" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="registry-server" containerID="cri-o://435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26" gracePeriod=2
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.070258 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp7rd"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.113476 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content\") pod \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") "
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.113532 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgmtk\" (UniqueName: \"kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk\") pod \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") "
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.113614 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities\") pod \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\" (UID: \"49f8db32-0c68-4c72-9aad-a02ce0c958aa\") "
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.114355 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities" (OuterVolumeSpecName: "utilities") pod "49f8db32-0c68-4c72-9aad-a02ce0c958aa" (UID: "49f8db32-0c68-4c72-9aad-a02ce0c958aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.118433 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk" (OuterVolumeSpecName: "kube-api-access-cgmtk") pod "49f8db32-0c68-4c72-9aad-a02ce0c958aa" (UID: "49f8db32-0c68-4c72-9aad-a02ce0c958aa"). InnerVolumeSpecName "kube-api-access-cgmtk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.137135 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49f8db32-0c68-4c72-9aad-a02ce0c958aa" (UID: "49f8db32-0c68-4c72-9aad-a02ce0c958aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.160978 5010 generic.go:334] "Generic (PLEG): container finished" podID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerID="435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26" exitCode=0
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.161034 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp7rd"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.161367 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerDied","Data":"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"}
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.161493 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp7rd" event={"ID":"49f8db32-0c68-4c72-9aad-a02ce0c958aa","Type":"ContainerDied","Data":"5fb8735def162698d86190ccce3a51a4ca66746325003df2b81d78c40f569048"}
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.161530 5010 scope.go:117] "RemoveContainer" containerID="435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.176118 5010 scope.go:117] "RemoveContainer" containerID="fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.186161 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"]
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.190369 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp7rd"]
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.197545 5010 scope.go:117] "RemoveContainer" containerID="e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.210368 5010 scope.go:117] "RemoveContainer" containerID="435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"
Feb 03 10:05:54 crc kubenswrapper[5010]: E0203 10:05:54.210763 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26\": container with ID starting with 435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26 not found: ID does not exist" containerID="435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.210795 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26"} err="failed to get container status \"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26\": rpc error: code = NotFound desc = could not find container \"435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26\": container with ID starting with 435125e58ee9434cfff52dc00067ea1991087f4e727758e855e9d613565ddf26 not found: ID does not exist"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.210819 5010 scope.go:117] "RemoveContainer" containerID="fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7"
Feb 03 10:05:54 crc kubenswrapper[5010]: E0203 10:05:54.211109 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7\": container with ID starting with fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7 not found: ID does not exist" containerID="fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.211138 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7"} err="failed to get container status \"fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7\": rpc error: code = NotFound desc = could not find container \"fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7\": container with ID starting with fe10503b93985181eb829a3f8a8e717bf9280acf1b8141cb971cdc624c555ee7 not found: ID does not exist"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.211149 5010 scope.go:117] "RemoveContainer" containerID="e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee"
Feb 03 10:05:54 crc kubenswrapper[5010]: E0203 10:05:54.211399 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee\": container with ID starting with e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee not found: ID does not exist" containerID="e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.211420 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee"} err="failed to get container status \"e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee\": rpc error: code = NotFound desc = could not find container \"e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee\": container with ID starting with e70831de14dc76fe2d8c698ee95b71e39567c1e454abec34c9a4a5c30f4aa8ee not found: ID does not exist"
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.215415 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.215454 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f8db32-0c68-4c72-9aad-a02ce0c958aa-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.215471 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgmtk\" (UniqueName: \"kubernetes.io/projected/49f8db32-0c68-4c72-9aad-a02ce0c958aa-kube-api-access-cgmtk\") on node \"crc\" DevicePath \"\""
Feb 03 10:05:54 crc kubenswrapper[5010]: I0203 10:05:54.512665 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" path="/var/lib/kubelet/pods/49f8db32-0c68-4c72-9aad-a02ce0c958aa/volumes"
Feb 03 10:06:01 crc kubenswrapper[5010]: I0203 10:06:01.279601 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rkqd6"]
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.703696 5010 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704470 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="extract-utilities"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704486 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="extract-utilities"
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704496 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704503 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704516 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704523 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704535 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="extract-utilities"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704542 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="extract-utilities"
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704551 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="extract-content"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704559 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="extract-content"
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704570 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="extract-content"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704577 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="extract-content"
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704591 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="extract-utilities"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704599 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="extract-utilities"
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704608 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="extract-utilities"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704616 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="extract-utilities"
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704626 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704635 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704647 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="extract-content"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704654 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="extract-content"
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704667 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="extract-content"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704674 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="extract-content"
Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.704685 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704692 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704811 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b28bac-b8da-4fa7-8282-3b97ef4decac" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704828 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb492ad-594e-4460-8a8b-3476a4a0ddfe" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704837 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f8db32-0c68-4c72-9aad-a02ce0c958aa" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.704849 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d7a138-50ca-4706-b760-2fe5154b2796" containerName="registry-server"
Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705159 5010 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705490 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a" gracePeriod=15 Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705629 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6" gracePeriod=15 Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705659 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5" gracePeriod=15 Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705711 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0" gracePeriod=15 Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705713 5010 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.705759 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d" gracePeriod=15 Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706185 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706199 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706239 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706253 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706265 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706275 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706288 5010 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706296 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706307 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706313 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706330 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706336 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706427 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706441 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706450 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706458 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706465 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 10:06:18 crc kubenswrapper[5010]: E0203 10:06:18.706548 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706554 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.706639 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.712172 5010 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.745996 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746043 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746142 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746167 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746192 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746269 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746299 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.746328 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847146 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847198 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847256 5010 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847295 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847328 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847340 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847357 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847427 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847465 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847490 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847507 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847522 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847552 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847401 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847470 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:18 crc kubenswrapper[5010]: I0203 10:06:18.847596 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.291225 5010 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.291648 5010 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.291924 5010 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.292142 5010 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.292373 5010 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.292397 5010 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.292604 5010 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="200ms" Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.306424 5010 generic.go:334] "Generic (PLEG): container finished" podID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" containerID="8235871772bfab300d8b3a5a6ad3309af90a9d4729dea3e53a02ffdbbd8677af" exitCode=0 Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.306496 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7c4b0e53-f63d-4ccf-a718-389b959a66c4","Type":"ContainerDied","Data":"8235871772bfab300d8b3a5a6ad3309af90a9d4729dea3e53a02ffdbbd8677af"} Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.307171 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.309375 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311012 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311853 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0" exitCode=0 Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311877 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d" exitCode=0 Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311887 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5" exitCode=0 Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311895 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6" exitCode=2 Feb 03 10:06:19 crc kubenswrapper[5010]: I0203 10:06:19.311926 5010 scope.go:117] "RemoveContainer" containerID="8fa046739638e19cb674bf38cedcce77ee1e0dd9414e5d8c6cc05f0cf988fb1b" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.493997 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="400ms" Feb 03 10:06:19 crc kubenswrapper[5010]: E0203 10:06:19.895760 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="800ms" Feb 03 10:06:20 crc 
kubenswrapper[5010]: I0203 10:06:20.320878 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.503589 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.556010 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.556595 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.664698 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access\") pod \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.664860 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir\") pod \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.664924 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock\") pod \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\" (UID: \"7c4b0e53-f63d-4ccf-a718-389b959a66c4\") " Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.664968 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7c4b0e53-f63d-4ccf-a718-389b959a66c4" (UID: "7c4b0e53-f63d-4ccf-a718-389b959a66c4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.665048 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock" (OuterVolumeSpecName: "var-lock") pod "7c4b0e53-f63d-4ccf-a718-389b959a66c4" (UID: "7c4b0e53-f63d-4ccf-a718-389b959a66c4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.665390 5010 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.665428 5010 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c4b0e53-f63d-4ccf-a718-389b959a66c4-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.670154 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7c4b0e53-f63d-4ccf-a718-389b959a66c4" (UID: "7c4b0e53-f63d-4ccf-a718-389b959a66c4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:06:20 crc kubenswrapper[5010]: E0203 10:06:20.696991 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="1.6s" Feb 03 10:06:20 crc kubenswrapper[5010]: I0203 10:06:20.777641 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b0e53-f63d-4ccf-a718-389b959a66c4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.065153 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.066046 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.066828 5010 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.067553 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.080893 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.080966 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.080992 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.081083 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.081162 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.081169 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.181792 5010 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.181826 5010 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.181838 5010 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.329564 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.329539 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7c4b0e53-f63d-4ccf-a718-389b959a66c4","Type":"ContainerDied","Data":"47e2fb47d49372688a6df246f47c04ec60321886600acbad24a608754f55694c"} Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.330035 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e2fb47d49372688a6df246f47c04ec60321886600acbad24a608754f55694c" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.332851 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.333631 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a" exitCode=0 Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.333702 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.333703 5010 scope.go:117] "RemoveContainer" containerID="8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.348902 5010 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.349347 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.349610 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.349914 5010 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.353335 5010 scope.go:117] "RemoveContainer" containerID="d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.369874 5010 scope.go:117] "RemoveContainer" containerID="93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.386826 5010 scope.go:117] "RemoveContainer" containerID="2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.400407 5010 scope.go:117] "RemoveContainer" containerID="15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.418923 5010 scope.go:117] "RemoveContainer" containerID="c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.437279 5010 scope.go:117] "RemoveContainer" containerID="8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0" Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.437696 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\": container with ID starting with 8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0 not found: ID does not exist" containerID="8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.437726 5010 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0"} err="failed to get container status \"8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\": rpc error: code = NotFound desc = could not find container \"8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0\": container with ID starting with 8e9e8bf69058ada4b4f2d760f7dc622b56f39260d3fb7127345ff5cce8c364d0 not found: ID does not exist" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.437761 5010 scope.go:117] "RemoveContainer" containerID="d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d" Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.438171 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\": container with ID starting with d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d not found: ID does not exist" containerID="d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.438272 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d"} err="failed to get container status \"d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\": rpc error: code = NotFound desc = could not find container \"d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d\": container with ID starting with d9cb13665138266f1bfa409e444ec7e684b9b9a470fcfc892356f18e4886197d not found: ID does not exist" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.438306 5010 scope.go:117] "RemoveContainer" containerID="93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5" Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.438628 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\": container with ID starting with 93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5 not found: ID does not exist" containerID="93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.438657 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5"} err="failed to get container status \"93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\": rpc error: code = NotFound desc = could not find container \"93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5\": container with ID starting with 93ad24344d47256e67af6bb73481b8c64cc5e492a62546949cc8e767fe0508b5 not found: ID does not exist" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.438671 5010 scope.go:117] "RemoveContainer" containerID="2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6" Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.438948 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\": container with ID starting with 2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6 not found: ID does not exist" 
containerID="2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.438979 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6"} err="failed to get container status \"2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\": rpc error: code = NotFound desc = could not find container \"2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6\": container with ID starting with 2a2fd8e920d1eab038348c6382e3a21bd472dd027adbd95e7fa049f6a429b5e6 not found: ID does not exist" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.439001 5010 scope.go:117] "RemoveContainer" containerID="15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a" Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.439334 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\": container with ID starting with 15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a not found: ID does not exist" containerID="15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.439381 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a"} err="failed to get container status \"15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\": rpc error: code = NotFound desc = could not find container \"15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a\": container with ID starting with 15e7014b33f6e506e99c1e467e471bfb75abd5e4eaf7cec750d1568e67e9520a not found: ID does not exist" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.439413 5010 scope.go:117] "RemoveContainer" containerID="c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709" Feb 03 10:06:21 crc kubenswrapper[5010]: E0203 10:06:21.439882 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\": container with ID starting with c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709 not found: ID does not exist" containerID="c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709" Feb 03 10:06:21 crc kubenswrapper[5010]: I0203 10:06:21.439905 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709"} err="failed to get container status \"c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\": rpc error: code = NotFound desc = could not find container \"c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709\": container with ID starting with c0ad72d485475b4a190ca53268d7500eaf096ca8b62451291af2c9b982d61709 not found: ID does not exist" Feb 03 10:06:22 crc kubenswrapper[5010]: E0203 10:06:22.297800 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="3.2s" Feb 03 10:06:22 crc kubenswrapper[5010]: I0203 10:06:22.507531 5010 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 03 10:06:22 crc kubenswrapper[5010]: E0203 10:06:22.595046 5010 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.58:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" volumeName="registry-storage" Feb 03 10:06:23 crc kubenswrapper[5010]: E0203 10:06:23.752510 5010 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:23 crc kubenswrapper[5010]: I0203 10:06:23.753659 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:23 crc kubenswrapper[5010]: E0203 10:06:23.788265 5010 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.58:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1890b48febd4ee53 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 10:06:23.786528339 +0000 UTC m=+253.942504508,LastTimestamp:2026-02-03 10:06:23.786528339 +0000 UTC m=+253.942504508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 10:06:24 crc kubenswrapper[5010]: I0203 10:06:24.354929 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"aafef9981fa7d11562eb0bd58e7300535437ad38c9714ffedb6d952272ad69e5"} Feb 03 10:06:24 crc kubenswrapper[5010]: I0203 10:06:24.355182 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"eceb1cc15ee7168b5595c5db18d300d855c0f2bb643dcd250feb96ade1e832e1"} Feb 03 10:06:24 crc kubenswrapper[5010]: E0203 10:06:24.355771 5010 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:06:24 crc kubenswrapper[5010]: I0203 10:06:24.355774 5010 
status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:25 crc kubenswrapper[5010]: E0203 10:06:25.498917 5010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="6.4s" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.309016 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerName="oauth-openshift" containerID="cri-o://a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a" gracePeriod=15 Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.635984 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.637265 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.637884 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651446 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651485 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651502 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651539 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: 
\"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651574 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651593 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651649 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651676 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651765 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwhnr\" (UniqueName: \"kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651787 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651805 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651836 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651881 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.651931 5010 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir\") pod \"5a475011-4dc0-4490-829a-8016f3b0e8a2\" (UID: \"5a475011-4dc0-4490-829a-8016f3b0e8a2\") " Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.652239 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.653068 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.653191 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.653406 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.656086 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.659262 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr" (OuterVolumeSpecName: "kube-api-access-vwhnr") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "kube-api-access-vwhnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.659930 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.660494 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.661179 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.662415 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.663003 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.663323 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.664839 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.665180 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5a475011-4dc0-4490-829a-8016f3b0e8a2" (UID: "5a475011-4dc0-4490-829a-8016f3b0e8a2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753508 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwhnr\" (UniqueName: \"kubernetes.io/projected/5a475011-4dc0-4490-829a-8016f3b0e8a2-kube-api-access-vwhnr\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753573 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753596 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753616 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753637 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753656 5010 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753680 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753719 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753815 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753835 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753898 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753918 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.753936 5010 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:26 crc kubenswrapper[5010]: I0203 10:06:26.754024 5010 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a475011-4dc0-4490-829a-8016f3b0e8a2-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.378595 5010 generic.go:334] "Generic (PLEG): container finished" podID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerID="a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a" exitCode=0 Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.378659 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" event={"ID":"5a475011-4dc0-4490-829a-8016f3b0e8a2","Type":"ContainerDied","Data":"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a"} Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.378689 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" event={"ID":"5a475011-4dc0-4490-829a-8016f3b0e8a2","Type":"ContainerDied","Data":"f8f57db6b0062ed4b61ecab8e52afe31f6118dd660c843052c1d2ff893b91694"} Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.378708 5010 scope.go:117] "RemoveContainer" containerID="a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a" Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.378846 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.380002 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.380295 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.398866 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.400169 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.401283 5010 scope.go:117] "RemoveContainer" containerID="a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a" Feb 03 10:06:27 crc kubenswrapper[5010]: E0203 10:06:27.401821 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a\": container with ID starting with a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a not found: ID does not exist" containerID="a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a" Feb 03 10:06:27 crc kubenswrapper[5010]: I0203 10:06:27.401898 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a"} err="failed to get container status \"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a\": rpc error: code = NotFound desc = could not find container \"a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a\": container with ID starting with a2f49a595dbe175fbfdc24c502099a3d936749e84c074b969104e5a1610a153a not found: ID does not exist" Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.501637 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.502450 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.502958 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.518954 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.518993 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:06:29 crc kubenswrapper[5010]: E0203 10:06:29.519312 5010 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:29 crc kubenswrapper[5010]: I0203 10:06:29.519762 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:29 crc kubenswrapper[5010]: E0203 10:06:29.556617 5010 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.58:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1890b48febd4ee53 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 10:06:23.786528339 +0000 UTC m=+253.942504508,LastTimestamp:2026-02-03 10:06:23.786528339 +0000 UTC m=+253.942504508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.407384 5010 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="130984c15228b1645c70fac6a3ea0163329e7b05678ff09e7839201026621284" exitCode=0 Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.407518 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"130984c15228b1645c70fac6a3ea0163329e7b05678ff09e7839201026621284"} Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.407723 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e5f4d9ba8915958723475d51778beb169ae52277f2ba92d70897a4962d74ca95"} Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.407989 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.408003 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:06:30 crc kubenswrapper[5010]: E0203 10:06:30.408476 5010 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.408485 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.408823 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.505963 5010 status_manager.go:851] "Failed to get status for pod" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" pod="openshift-authentication/oauth-openshift-558db77b4-rkqd6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rkqd6\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.507307 5010 status_manager.go:851] "Failed to get status for pod" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:30 crc kubenswrapper[5010]: I0203 10:06:30.507641 5010 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.58:6443: connect: connection refused" Feb 03 10:06:31 crc kubenswrapper[5010]: I0203 10:06:31.422981 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e329c5326b6873d342f471b4c611fb436b3273601897d8e76ca8103b2a975195"} Feb 03 10:06:31 crc 
kubenswrapper[5010]: I0203 10:06:31.423030 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"81092245012bb637362380c436fbe24d363cd1e8683ab57b019b3091706a06cb"} Feb 03 10:06:31 crc kubenswrapper[5010]: I0203 10:06:31.423045 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ca05718ba8490974414ad3e3834f1f837372bed44286db631e74b158eca5e888"} Feb 03 10:06:31 crc kubenswrapper[5010]: I0203 10:06:31.423058 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4a0a06412578de949f9ce10bb5bf1d6a63e59acc35e22482e168f9f133769da4"} Feb 03 10:06:32 crc kubenswrapper[5010]: I0203 10:06:32.431431 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7e45bb0330e0cf83e4dc82a1b4fbd878697ef55826bdfdacc4ff20265b91488c"} Feb 03 10:06:32 crc kubenswrapper[5010]: I0203 10:06:32.431735 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:32 crc kubenswrapper[5010]: I0203 10:06:32.431763 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:06:32 crc kubenswrapper[5010]: I0203 10:06:32.431790 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:06:33 crc kubenswrapper[5010]: I0203 10:06:33.439253 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 03 10:06:33 crc kubenswrapper[5010]: I0203 10:06:33.439741 5010 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624" exitCode=1 Feb 03 10:06:33 crc kubenswrapper[5010]: I0203 10:06:33.439822 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624"} Feb 03 10:06:33 crc kubenswrapper[5010]: I0203 10:06:33.440196 5010 scope.go:117] "RemoveContainer" containerID="0c212bc94a790d52d8ff793d120139e9f33e940cd3661c5037e10ab5e8650624" Feb 03 10:06:34 crc kubenswrapper[5010]: I0203 10:06:34.448947 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 03 10:06:34 crc kubenswrapper[5010]: I0203 10:06:34.449327 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5bec1cfba10ca1f56b68d49b130113cc5cdf2727ab40a1341de7e7c13a51daf4"} Feb 03 10:06:34 crc kubenswrapper[5010]: I0203 10:06:34.520538 5010 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:34 crc kubenswrapper[5010]: I0203 10:06:34.520923 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:34 crc kubenswrapper[5010]: I0203 10:06:34.527791 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:35 crc kubenswrapper[5010]: I0203 10:06:35.738367 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.447199 5010 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.475998 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.476030 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.480337 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.494083 5010 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f83e6949-33d8-4005-aece-aaede1aac552\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:06:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:06:30Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T10:06:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0a06412578de949f9ce10bb5bf1d6a63e59acc35e22482e168f9f133769da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81092245012bb637362380c436fbe24d363cd1e8683ab57b019b3091706a06cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca05718ba8490974414ad3e3834f1f837372bed44286db631e74b158eca5e888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e45bb0330e0cf83e4dc82a1b4fbd878697ef55826bdfdacc4ff20265b91488c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329c5326
b6873d342f471b4c611fb436b3273601897d8e76ca8103b2a975195\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T10:06:31Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://130984c15228b1645c70fac6a3ea0163329e7b05678ff09e7839201026621284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://130984c15228b1645c70fac6a3ea0163329e7b05678ff09e7839201026621284\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T10:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T10:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"f83e6949-33d8-4005-aece-aaede1aac552\": field is immutable" Feb 03 10:06:37 crc kubenswrapper[5010]: I0203 10:06:37.562562 5010 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="39895606-8c63-4761-bd8f-01d17ba4215e" Feb 03 10:06:38 crc kubenswrapper[5010]: I0203 10:06:38.482313 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:06:38 crc kubenswrapper[5010]: I0203 10:06:38.482372 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:06:38 crc kubenswrapper[5010]: I0203 10:06:38.487381 5010 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="39895606-8c63-4761-bd8f-01d17ba4215e" Feb 03 10:06:40 crc kubenswrapper[5010]: I0203 10:06:40.536472 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:06:40 crc kubenswrapper[5010]: I0203 10:06:40.540383 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:06:43 crc kubenswrapper[5010]: I0203 10:06:43.952419 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 03 10:06:44 crc kubenswrapper[5010]: I0203 10:06:44.016302 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 03 10:06:44 crc kubenswrapper[5010]: 
I0203 10:06:44.027146 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 03 10:06:44 crc kubenswrapper[5010]: I0203 10:06:44.243120 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 10:06:45 crc kubenswrapper[5010]: I0203 10:06:45.495197 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 03 10:06:45 crc kubenswrapper[5010]: I0203 10:06:45.716634 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 03 10:06:45 crc kubenswrapper[5010]: I0203 10:06:45.716637 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 03 10:06:45 crc kubenswrapper[5010]: I0203 10:06:45.748758 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 10:06:46 crc kubenswrapper[5010]: I0203 10:06:46.502361 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 03 10:06:46 crc kubenswrapper[5010]: I0203 10:06:46.755610 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 03 10:06:46 crc kubenswrapper[5010]: I0203 10:06:46.841577 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 03 10:06:47 crc kubenswrapper[5010]: I0203 10:06:47.406411 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 03 10:06:47 crc kubenswrapper[5010]: I0203 10:06:47.685640 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 03 10:06:48 crc kubenswrapper[5010]: I0203 10:06:48.603752 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.296353 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.312178 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.377854 5010 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.384650 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.401057 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.501597 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.669302 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 03 10:06:49 crc kubenswrapper[5010]: I0203 10:06:49.950392 5010 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.019207 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.108646 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.257643 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.341479 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.417737 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.589515 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 03 10:06:50 crc kubenswrapper[5010]: I0203 10:06:50.916196 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.042375 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.084511 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.418859 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.758461 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.853803 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.897941 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.939726 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.942677 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.955631 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 03 10:06:51 crc kubenswrapper[5010]: I0203 10:06:51.964606 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.081007 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 10:06:52 crc 
kubenswrapper[5010]: I0203 10:06:52.311723 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.351915 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.424143 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.565657 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.565656 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.645890 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.739582 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 03 10:06:52 crc kubenswrapper[5010]: I0203 10:06:52.842892 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.186863 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.191986 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.247649 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.327036 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.328693 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.428947 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.467588 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.511180 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.545040 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.677467 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.700139 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 
10:06:53.830243 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 10:06:53 crc kubenswrapper[5010]: I0203 10:06:53.963853 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.011975 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.034584 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.083060 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.118205 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.181977 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.331373 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.347011 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.353941 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.359118 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.387638 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.444624 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.460578 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.468029 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.577687 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.608495 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.871356 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 10:06:54.954645 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 10:06:54 crc kubenswrapper[5010]: I0203 
10:06:54.979187 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.053974 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.094735 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.183626 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.270733 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.308727 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.369331 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.397748 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.454012 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.548054 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.577179 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.578620 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.593012 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.602741 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.845710 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.845801 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.847230 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.856571 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 03 10:06:55 crc kubenswrapper[5010]: I0203 10:06:55.872377 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.025429 5010 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.079677 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.123374 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.161171 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.166795 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.175862 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.296080 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.388763 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.511383 5010 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.532926 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.534282 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.605525 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.651829 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.804559 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.805160 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.926084 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.984659 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 03 10:06:56 crc kubenswrapper[5010]: I0203 10:06:56.984775 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.036537 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.068052 5010 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.188885 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.296073 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.311151 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.393431 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.473192 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.600890 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.896200 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 03 10:06:57 crc kubenswrapper[5010]: I0203 10:06:57.982360 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.021953 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.024615 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.041536 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.048802 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.176748 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.341283 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.381969 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.423858 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.424658 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.462026 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.485389 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 
03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.510245 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.514472 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.741292 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.800516 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.879695 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 03 10:06:58 crc kubenswrapper[5010]: I0203 10:06:58.885819 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.007440 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.053419 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.072983 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.089087 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.089128 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.100910 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.194744 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.299372 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.326294 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.328275 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.371888 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.432283 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.455373 5010 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 
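[Annotation, not part of the journal] The long run of "reflector.go:368] Caches populated for *v1.ConfigMap/..." and "*v1.Secret/..." entries above marks the moment each per-namespace object informer inside the kubelet finishes its initial LIST and its watch cache becomes usable for the Secrets and ConfigMaps referenced by pods on this node. A minimal client-go sketch of that same mechanism, assuming a kubeconfig at the default location and using the "openshift-multus" namespace from the log as an example; the kubelet's real wiring (its object-"namespace"/"name" scoped caches) differs in detail:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at ~/.kube/config; the kubelet itself
	// authenticates with its own credentials instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One factory scoped to a single namespace, mirroring how the log names
	// each cache per object-"namespace"/"name" pair.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-multus"))
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// The initial LIST+WATCH completing is the event the kubelet logs as
	// "Caches populated for *v1.ConfigMap ...".
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("informer cache never synced")
	}
	fmt.Println("ConfigMap cache populated for openshift-multus")
}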
Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.544020 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.626956 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.750810 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.751692 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.768818 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.804295 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.841557 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.891331 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 03 10:06:59 crc kubenswrapper[5010]: I0203 10:06:59.894203 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.022271 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.064323 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.067547 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.103728 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.108271 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.172990 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.378767 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.477049 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.593515 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.620168 5010 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.627668 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.651357 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.706848 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.788486 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.798093 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.824845 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.862293 5010 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866068 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-rkqd6"] Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866118 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-55896b6b9d-9qj5p"] Feb 03 10:07:00 crc kubenswrapper[5010]: E0203 10:07:00.866303 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerName="oauth-openshift" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866325 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerName="oauth-openshift" Feb 03 10:07:00 crc kubenswrapper[5010]: E0203 10:07:00.866336 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" containerName="installer" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866354 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" containerName="installer" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866591 5010 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866620 5010 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f83e6949-33d8-4005-aece-aaede1aac552" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866939 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c4b0e53-f63d-4ccf-a718-389b959a66c4" containerName="installer" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.866968 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" containerName="oauth-openshift" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.867662 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.871091 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.871200 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.872566 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.876114 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.876175 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.876323 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881272 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881297 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881358 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881307 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881396 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881538 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.881590 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.887890 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.893049 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.895266 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.895293 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.916195 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.916178181 
podStartE2EDuration="23.916178181s" podCreationTimestamp="2026-02-03 10:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:00.913566341 +0000 UTC m=+291.069542470" watchObservedRunningTime="2026-02-03 10:07:00.916178181 +0000 UTC m=+291.072154310" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.947126 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 10:07:00 crc kubenswrapper[5010]: I0203 10:07:00.994980 5010 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019792 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019837 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-router-certs\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019866 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019887 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-policies\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019910 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-error\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019929 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2ct\" (UniqueName: \"kubernetes.io/projected/ed8954d4-a9be-4760-8944-4e7da0eadcab-kube-api-access-8b2ct\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019945 5010 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019960 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-dir\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.019979 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.020001 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-session\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.020018 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-service-ca\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.020034 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.020051 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-login\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.020071 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: 
\"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121444 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121485 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-router-certs\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121514 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121536 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-policies\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121563 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-error\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121589 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2ct\" (UniqueName: \"kubernetes.io/projected/ed8954d4-a9be-4760-8944-4e7da0eadcab-kube-api-access-8b2ct\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121618 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121642 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-dir\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " 
pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121670 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121700 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-session\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121725 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-service-ca\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121755 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121778 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-login\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.121803 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.122438 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-policies\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.122514 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed8954d4-a9be-4760-8944-4e7da0eadcab-audit-dir\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" 
Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.122820 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.123038 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-service-ca\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.124094 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.126808 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-session\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.127259 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-login\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.128663 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.128907 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-router-certs\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.129116 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-error\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 
10:07:01.129354 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.130470 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.130729 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed8954d4-a9be-4760-8944-4e7da0eadcab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.136860 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.138665 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b2ct\" (UniqueName: \"kubernetes.io/projected/ed8954d4-a9be-4760-8944-4e7da0eadcab-kube-api-access-8b2ct\") pod \"oauth-openshift-55896b6b9d-9qj5p\" (UID: \"ed8954d4-a9be-4760-8944-4e7da0eadcab\") " pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.194690 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.203102 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.404591 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55896b6b9d-9qj5p"] Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.462346 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.468437 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.520295 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.538941 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.599345 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" event={"ID":"ed8954d4-a9be-4760-8944-4e7da0eadcab","Type":"ContainerStarted","Data":"f11c9e27c0a8c5d17b1343cd4d162b3a3667b342949536f6b6607f8c8ae493dd"} Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.619407 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.771311 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.798074 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.834885 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.848607 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.937310 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 03 10:07:01 crc kubenswrapper[5010]: I0203 10:07:01.959274 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.036524 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.257662 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.318180 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.321601 5010 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.356597 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.360866 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.368439 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.495313 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.498336 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.508113 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a475011-4dc0-4490-829a-8016f3b0e8a2" path="/var/lib/kubelet/pods/5a475011-4dc0-4490-829a-8016f3b0e8a2/volumes" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.544149 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.604754 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" event={"ID":"ed8954d4-a9be-4760-8944-4e7da0eadcab","Type":"ContainerStarted","Data":"7b9f6fe6dd230da7bd7852cf9c0b7300054690be522e49d93983867325faf008"} Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.605023 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.610119 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.623326 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55896b6b9d-9qj5p" podStartSLOduration=61.623307987 podStartE2EDuration="1m1.623307987s" podCreationTimestamp="2026-02-03 10:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:02.623145733 +0000 UTC m=+292.779121872" watchObservedRunningTime="2026-02-03 10:07:02.623307987 +0000 UTC m=+292.779284116" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.767231 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.853477 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.882866 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 03 10:07:02 crc kubenswrapper[5010]: I0203 10:07:02.973342 5010 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.117109 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.358874 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.362181 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.407731 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.546977 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.608624 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.754319 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 10:07:03 crc kubenswrapper[5010]: I0203 10:07:03.803854 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.071445 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.108866 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.159187 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.395120 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.562516 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.639629 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.715252 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.849163 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 10:07:04 crc kubenswrapper[5010]: I0203 10:07:04.878157 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 10:07:05 crc kubenswrapper[5010]: I0203 10:07:05.302963 5010 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 03 10:07:05 crc kubenswrapper[5010]: I0203 10:07:05.396092 5010 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 10:07:05 crc kubenswrapper[5010]: I0203 10:07:05.435515 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 03 10:07:05 crc kubenswrapper[5010]: I0203 10:07:05.750416 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 10:07:05 crc kubenswrapper[5010]: I0203 10:07:05.956259 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 03 10:07:06 crc kubenswrapper[5010]: I0203 10:07:06.188780 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 10:07:06 crc kubenswrapper[5010]: I0203 10:07:06.231648 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 10:07:06 crc kubenswrapper[5010]: I0203 10:07:06.888794 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 10:07:07 crc kubenswrapper[5010]: I0203 10:07:07.513668 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 03 10:07:08 crc kubenswrapper[5010]: I0203 10:07:08.058674 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 10:07:10 crc kubenswrapper[5010]: I0203 10:07:10.295427 5010 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 03 10:07:11 crc kubenswrapper[5010]: I0203 10:07:11.127851 5010 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 10:07:11 crc kubenswrapper[5010]: I0203 10:07:11.128508 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://aafef9981fa7d11562eb0bd58e7300535437ad38c9714ffedb6d952272ad69e5" gracePeriod=5 Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.677260 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.678411 5010 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="aafef9981fa7d11562eb0bd58e7300535437ad38c9714ffedb6d952272ad69e5" exitCode=137 Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.678487 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eceb1cc15ee7168b5595c5db18d300d855c0f2bb643dcd250feb96ade1e832e1" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.693999 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.694083 5010 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735061 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735125 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735142 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735156 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735183 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735259 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735322 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735377 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735374 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735559 5010 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735578 5010 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735592 5010 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.735604 5010 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.742498 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:07:16 crc kubenswrapper[5010]: I0203 10:07:16.837400 5010 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:17 crc kubenswrapper[5010]: I0203 10:07:17.682319 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 10:07:18 crc kubenswrapper[5010]: I0203 10:07:18.511691 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.401918 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"] Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.402699 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerName="controller-manager" containerID="cri-o://9193e654b0aae87a0f6cb66b87865bff8d5a0d8845927c6e2ff446174e9141b4" gracePeriod=30 Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.499895 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"] Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.500138 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" containerName="route-controller-manager" containerID="cri-o://815c9a092d4240f3fb7d7c856a7d1fe04289a8f354f5c335fb93d5de0abf1f2c" gracePeriod=30 Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.741837 5010 generic.go:334] "Generic (PLEG): container finished" podID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerID="9193e654b0aae87a0f6cb66b87865bff8d5a0d8845927c6e2ff446174e9141b4" exitCode=0 Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.741912 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" event={"ID":"e27ae235-3c1c-4ee0-85b6-a53477e335e5","Type":"ContainerDied","Data":"9193e654b0aae87a0f6cb66b87865bff8d5a0d8845927c6e2ff446174e9141b4"} Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.741941 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" event={"ID":"e27ae235-3c1c-4ee0-85b6-a53477e335e5","Type":"ContainerDied","Data":"8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81"} Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.741955 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b56ac9ef9b68e183b29025350e04525ecb7ee2dc150d387fdfd29f29126ba81" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.743745 5010 generic.go:334] "Generic (PLEG): container finished" podID="61153282-2bd6-4bbf-a04a-76909b13f961" containerID="815c9a092d4240f3fb7d7c856a7d1fe04289a8f354f5c335fb93d5de0abf1f2c" exitCode=0 Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.743779 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" event={"ID":"61153282-2bd6-4bbf-a04a-76909b13f961","Type":"ContainerDied","Data":"815c9a092d4240f3fb7d7c856a7d1fe04289a8f354f5c335fb93d5de0abf1f2c"} Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.745473 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.805998 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.902147 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert\") pod \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.902300 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzx2n\" (UniqueName: \"kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n\") pod \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.902347 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles\") pod \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.903348 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config\") pod \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.903384 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca\") pod \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\" (UID: \"e27ae235-3c1c-4ee0-85b6-a53477e335e5\") " Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.902983 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e27ae235-3c1c-4ee0-85b6-a53477e335e5" (UID: "e27ae235-3c1c-4ee0-85b6-a53477e335e5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.903919 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "e27ae235-3c1c-4ee0-85b6-a53477e335e5" (UID: "e27ae235-3c1c-4ee0-85b6-a53477e335e5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.904471 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config" (OuterVolumeSpecName: "config") pod "e27ae235-3c1c-4ee0-85b6-a53477e335e5" (UID: "e27ae235-3c1c-4ee0-85b6-a53477e335e5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.907199 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e27ae235-3c1c-4ee0-85b6-a53477e335e5" (UID: "e27ae235-3c1c-4ee0-85b6-a53477e335e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:26 crc kubenswrapper[5010]: I0203 10:07:26.907333 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n" (OuterVolumeSpecName: "kube-api-access-lzx2n") pod "e27ae235-3c1c-4ee0-85b6-a53477e335e5" (UID: "e27ae235-3c1c-4ee0-85b6-a53477e335e5"). InnerVolumeSpecName "kube-api-access-lzx2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004075 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca\") pod \"61153282-2bd6-4bbf-a04a-76909b13f961\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004137 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert\") pod \"61153282-2bd6-4bbf-a04a-76909b13f961\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004192 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzqxj\" (UniqueName: \"kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj\") pod \"61153282-2bd6-4bbf-a04a-76909b13f961\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004248 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config\") pod \"61153282-2bd6-4bbf-a04a-76909b13f961\" (UID: \"61153282-2bd6-4bbf-a04a-76909b13f961\") " Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004517 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e27ae235-3c1c-4ee0-85b6-a53477e335e5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004535 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzx2n\" (UniqueName: \"kubernetes.io/projected/e27ae235-3c1c-4ee0-85b6-a53477e335e5-kube-api-access-lzx2n\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004548 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004561 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.004573 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/e27ae235-3c1c-4ee0-85b6-a53477e335e5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.005039 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca" (OuterVolumeSpecName: "client-ca") pod "61153282-2bd6-4bbf-a04a-76909b13f961" (UID: "61153282-2bd6-4bbf-a04a-76909b13f961"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.005238 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config" (OuterVolumeSpecName: "config") pod "61153282-2bd6-4bbf-a04a-76909b13f961" (UID: "61153282-2bd6-4bbf-a04a-76909b13f961"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.007980 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61153282-2bd6-4bbf-a04a-76909b13f961" (UID: "61153282-2bd6-4bbf-a04a-76909b13f961"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.008272 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj" (OuterVolumeSpecName: "kube-api-access-wzqxj") pod "61153282-2bd6-4bbf-a04a-76909b13f961" (UID: "61153282-2bd6-4bbf-a04a-76909b13f961"). InnerVolumeSpecName "kube-api-access-wzqxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.105318 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.105359 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61153282-2bd6-4bbf-a04a-76909b13f961-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.105370 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61153282-2bd6-4bbf-a04a-76909b13f961-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.105382 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzqxj\" (UniqueName: \"kubernetes.io/projected/61153282-2bd6-4bbf-a04a-76909b13f961-kube-api-access-wzqxj\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.752828 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc7dd" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.752851 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" event={"ID":"61153282-2bd6-4bbf-a04a-76909b13f961","Type":"ContainerDied","Data":"de6014a42b56ede90300ddd6921cb59d6826d8880dbadae1fda87913014c2ca8"} Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.752874 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.752911 5010 scope.go:117] "RemoveContainer" containerID="815c9a092d4240f3fb7d7c856a7d1fe04289a8f354f5c335fb93d5de0abf1f2c" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.789888 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"] Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.792849 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc7dd"] Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.802079 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"] Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.805738 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgmq6"] Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.976450 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:27 crc kubenswrapper[5010]: E0203 10:07:27.976790 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerName="controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.976819 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerName="controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: E0203 10:07:27.976839 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.976851 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 10:07:27 crc kubenswrapper[5010]: E0203 10:07:27.977148 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" containerName="route-controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.977316 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" containerName="route-controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.977696 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.977740 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" containerName="controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.977762 5010 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" containerName="route-controller-manager" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.978346 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.981871 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.982377 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.982716 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.982983 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.983511 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.983818 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.983937 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.984785 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.990148 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.990320 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.990534 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.990763 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.990849 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.991068 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.999508 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 10:07:27 crc kubenswrapper[5010]: I0203 10:07:27.999848 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.008705 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] 
Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123621 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123684 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123707 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123727 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123741 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2rkt\" (UniqueName: \"kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123764 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123785 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfglv\" (UniqueName: \"kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123805 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " 
pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.123820 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.224634 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfglv\" (UniqueName: \"kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.225031 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.225266 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.225612 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.225838 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.226000 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.226007 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.226096 5010 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.226130 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2rkt\" (UniqueName: \"kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.226156 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.227374 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.227680 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.228707 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.228924 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.234996 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.238160 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert\") pod 
\"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.248722 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfglv\" (UniqueName: \"kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv\") pod \"route-controller-manager-df4484484-vwxdt\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.248862 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2rkt\" (UniqueName: \"kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt\") pod \"controller-manager-556878559b-xhhgj\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.305941 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.321730 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.510291 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61153282-2bd6-4bbf-a04a-76909b13f961" path="/var/lib/kubelet/pods/61153282-2bd6-4bbf-a04a-76909b13f961/volumes" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.511295 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27ae235-3c1c-4ee0-85b6-a53477e335e5" path="/var/lib/kubelet/pods/e27ae235-3c1c-4ee0-85b6-a53477e335e5/volumes" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.539761 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.590606 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] Feb 03 10:07:28 crc kubenswrapper[5010]: W0203 10:07:28.596356 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod803625af_3cec_45c4_98a2_08da45692f88.slice/crio-00f89a3fa11f161985f39a10a6a8ae129cff868d26ae98a480538d6e0b0ca29f WatchSource:0}: Error finding container 00f89a3fa11f161985f39a10a6a8ae129cff868d26ae98a480538d6e0b0ca29f: Status 404 returned error can't find the container with id 00f89a3fa11f161985f39a10a6a8ae129cff868d26ae98a480538d6e0b0ca29f Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.760178 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" event={"ID":"803625af-3cec-45c4-98a2-08da45692f88","Type":"ContainerStarted","Data":"a2e7d9b77453479a86c7ec92a3e914d2ca2b35e41ce40278a55f958d04f671ca"} Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.760503 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.760519 5010 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" event={"ID":"803625af-3cec-45c4-98a2-08da45692f88","Type":"ContainerStarted","Data":"00f89a3fa11f161985f39a10a6a8ae129cff868d26ae98a480538d6e0b0ca29f"} Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.761707 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" event={"ID":"30cf9a28-0b6e-4cf8-b513-fa463560e886","Type":"ContainerStarted","Data":"8353a44a5500e444d2337a68b2c4782198c30ca7befd61a0c2d9c52c3869471c"} Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.761775 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" event={"ID":"30cf9a28-0b6e-4cf8-b513-fa463560e886","Type":"ContainerStarted","Data":"c78d78b59354c76497fffece8dac6bbcd201b1d7431edbd2dda46259787581a3"} Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.761900 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.762285 5010 patch_prober.go:28] interesting pod/route-controller-manager-df4484484-vwxdt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.762326 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" podUID="803625af-3cec-45c4-98a2-08da45692f88" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.763391 5010 patch_prober.go:28] interesting pod/controller-manager-556878559b-xhhgj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.763428 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.775093 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" podStartSLOduration=2.775078953 podStartE2EDuration="2.775078953s" podCreationTimestamp="2026-02-03 10:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:28.774109027 +0000 UTC m=+318.930085156" watchObservedRunningTime="2026-02-03 10:07:28.775078953 +0000 UTC m=+318.931055082" Feb 03 10:07:28 crc kubenswrapper[5010]: I0203 10:07:28.791782 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" podStartSLOduration=2.791766168 
podStartE2EDuration="2.791766168s" podCreationTimestamp="2026-02-03 10:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:28.791530161 +0000 UTC m=+318.947506290" watchObservedRunningTime="2026-02-03 10:07:28.791766168 +0000 UTC m=+318.947742297" Feb 03 10:07:29 crc kubenswrapper[5010]: I0203 10:07:29.771750 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:29 crc kubenswrapper[5010]: I0203 10:07:29.772507 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.575016 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.575572 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerName="controller-manager" containerID="cri-o://8353a44a5500e444d2337a68b2c4782198c30ca7befd61a0c2d9c52c3869471c" gracePeriod=30 Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.598726 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.598926 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" podUID="803625af-3cec-45c4-98a2-08da45692f88" containerName="route-controller-manager" containerID="cri-o://a2e7d9b77453479a86c7ec92a3e914d2ca2b35e41ce40278a55f958d04f671ca" gracePeriod=30 Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.787818 5010 generic.go:334] "Generic (PLEG): container finished" podID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerID="8353a44a5500e444d2337a68b2c4782198c30ca7befd61a0c2d9c52c3869471c" exitCode=0 Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.787906 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" event={"ID":"30cf9a28-0b6e-4cf8-b513-fa463560e886","Type":"ContainerDied","Data":"8353a44a5500e444d2337a68b2c4782198c30ca7befd61a0c2d9c52c3869471c"} Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.791927 5010 generic.go:334] "Generic (PLEG): container finished" podID="803625af-3cec-45c4-98a2-08da45692f88" containerID="a2e7d9b77453479a86c7ec92a3e914d2ca2b35e41ce40278a55f958d04f671ca" exitCode=0 Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.791974 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" event={"ID":"803625af-3cec-45c4-98a2-08da45692f88","Type":"ContainerDied","Data":"a2e7d9b77453479a86c7ec92a3e914d2ca2b35e41ce40278a55f958d04f671ca"} Feb 03 10:07:32 crc kubenswrapper[5010]: I0203 10:07:32.993789 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.087091 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert\") pod \"803625af-3cec-45c4-98a2-08da45692f88\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.087144 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfglv\" (UniqueName: \"kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv\") pod \"803625af-3cec-45c4-98a2-08da45692f88\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.087175 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca\") pod \"803625af-3cec-45c4-98a2-08da45692f88\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.087271 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config\") pod \"803625af-3cec-45c4-98a2-08da45692f88\" (UID: \"803625af-3cec-45c4-98a2-08da45692f88\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.088364 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config" (OuterVolumeSpecName: "config") pod "803625af-3cec-45c4-98a2-08da45692f88" (UID: "803625af-3cec-45c4-98a2-08da45692f88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.091141 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca" (OuterVolumeSpecName: "client-ca") pod "803625af-3cec-45c4-98a2-08da45692f88" (UID: "803625af-3cec-45c4-98a2-08da45692f88"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.096466 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv" (OuterVolumeSpecName: "kube-api-access-jfglv") pod "803625af-3cec-45c4-98a2-08da45692f88" (UID: "803625af-3cec-45c4-98a2-08da45692f88"). InnerVolumeSpecName "kube-api-access-jfglv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.096785 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "803625af-3cec-45c4-98a2-08da45692f88" (UID: "803625af-3cec-45c4-98a2-08da45692f88"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.145741 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.187848 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2rkt\" (UniqueName: \"kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt\") pod \"30cf9a28-0b6e-4cf8-b513-fa463560e886\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.187894 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles\") pod \"30cf9a28-0b6e-4cf8-b513-fa463560e886\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.187927 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config\") pod \"30cf9a28-0b6e-4cf8-b513-fa463560e886\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.187959 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca\") pod \"30cf9a28-0b6e-4cf8-b513-fa463560e886\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.187982 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert\") pod \"30cf9a28-0b6e-4cf8-b513-fa463560e886\" (UID: \"30cf9a28-0b6e-4cf8-b513-fa463560e886\") " Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.188125 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803625af-3cec-45c4-98a2-08da45692f88-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.188137 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfglv\" (UniqueName: \"kubernetes.io/projected/803625af-3cec-45c4-98a2-08da45692f88-kube-api-access-jfglv\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.188146 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.188154 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803625af-3cec-45c4-98a2-08da45692f88-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.188628 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "30cf9a28-0b6e-4cf8-b513-fa463560e886" (UID: "30cf9a28-0b6e-4cf8-b513-fa463560e886"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.189171 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config" (OuterVolumeSpecName: "config") pod "30cf9a28-0b6e-4cf8-b513-fa463560e886" (UID: "30cf9a28-0b6e-4cf8-b513-fa463560e886"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.189508 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca" (OuterVolumeSpecName: "client-ca") pod "30cf9a28-0b6e-4cf8-b513-fa463560e886" (UID: "30cf9a28-0b6e-4cf8-b513-fa463560e886"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.192408 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt" (OuterVolumeSpecName: "kube-api-access-d2rkt") pod "30cf9a28-0b6e-4cf8-b513-fa463560e886" (UID: "30cf9a28-0b6e-4cf8-b513-fa463560e886"). InnerVolumeSpecName "kube-api-access-d2rkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.197379 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "30cf9a28-0b6e-4cf8-b513-fa463560e886" (UID: "30cf9a28-0b6e-4cf8-b513-fa463560e886"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.289160 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30cf9a28-0b6e-4cf8-b513-fa463560e886-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.289189 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2rkt\" (UniqueName: \"kubernetes.io/projected/30cf9a28-0b6e-4cf8-b513-fa463560e886-kube-api-access-d2rkt\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.289199 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.289210 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.289230 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30cf9a28-0b6e-4cf8-b513-fa463560e886-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.797923 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.797916 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt" event={"ID":"803625af-3cec-45c4-98a2-08da45692f88","Type":"ContainerDied","Data":"00f89a3fa11f161985f39a10a6a8ae129cff868d26ae98a480538d6e0b0ca29f"} Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.798263 5010 scope.go:117] "RemoveContainer" containerID="a2e7d9b77453479a86c7ec92a3e914d2ca2b35e41ce40278a55f958d04f671ca" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.800141 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" event={"ID":"30cf9a28-0b6e-4cf8-b513-fa463560e886","Type":"ContainerDied","Data":"c78d78b59354c76497fffece8dac6bbcd201b1d7431edbd2dda46259787581a3"} Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.800194 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-556878559b-xhhgj" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.815565 5010 scope.go:117] "RemoveContainer" containerID="8353a44a5500e444d2337a68b2c4782198c30ca7befd61a0c2d9c52c3869471c" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.829484 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.836175 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-df4484484-vwxdt"] Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.841626 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.844860 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-556878559b-xhhgj"] Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982143 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:33 crc kubenswrapper[5010]: E0203 10:07:33.982415 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803625af-3cec-45c4-98a2-08da45692f88" containerName="route-controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982429 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="803625af-3cec-45c4-98a2-08da45692f88" containerName="route-controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: E0203 10:07:33.982445 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerName="controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982452 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerName="controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982559 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" containerName="controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982568 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="803625af-3cec-45c4-98a2-08da45692f88" 
containerName="route-controller-manager" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.982912 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.984781 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:33 crc kubenswrapper[5010]: I0203 10:07:33.984805 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.000090 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.000694 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002286 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002337 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002362 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002391 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsqq6\" (UniqueName: \"kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002421 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.002745 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.003146 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 
03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.004048 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.004534 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.005311 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.013802 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.014255 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.014450 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.014573 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.014685 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.014631 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.016428 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.021720 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103060 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103109 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103143 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsqq6\" (UniqueName: \"kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103179 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103198 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjv6z\" (UniqueName: \"kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103222 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103491 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103534 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.103694 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.104191 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.104520 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.105596 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config\") pod 
\"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.122676 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.125431 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsqq6\" (UniqueName: \"kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6\") pod \"controller-manager-5d5bd7d9c6-cjf6h\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.204501 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.204769 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjv6z\" (UniqueName: \"kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.204916 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.205019 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.205933 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.206560 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " 
pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.210881 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.224062 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjv6z\" (UniqueName: \"kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z\") pod \"route-controller-manager-bc8d5fc56-6dhjw\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.315168 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.337265 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.510074 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30cf9a28-0b6e-4cf8-b513-fa463560e886" path="/var/lib/kubelet/pods/30cf9a28-0b6e-4cf8-b513-fa463560e886/volumes" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.510806 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="803625af-3cec-45c4-98a2-08da45692f88" path="/var/lib/kubelet/pods/803625af-3cec-45c4-98a2-08da45692f88/volumes" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.551127 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.808524 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" event={"ID":"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea","Type":"ContainerStarted","Data":"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e"} Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.808797 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" event={"ID":"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea","Type":"ContainerStarted","Data":"1c4e6d1216d15486952944a34883d2752e446df691ee61abdfb4affcfd9e809d"} Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.809846 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.811162 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:34 crc kubenswrapper[5010]: W0203 10:07:34.820451 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb13d6ce0_d473_4529_89a4_2e7b8ad864b3.slice/crio-39efd5ea97ac3b2dc44326e763a027b144e99ab980f51894254b44b9a8a1f54d WatchSource:0}: Error finding container 
39efd5ea97ac3b2dc44326e763a027b144e99ab980f51894254b44b9a8a1f54d: Status 404 returned error can't find the container with id 39efd5ea97ac3b2dc44326e763a027b144e99ab980f51894254b44b9a8a1f54d Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.821230 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:34 crc kubenswrapper[5010]: I0203 10:07:34.854655 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" podStartSLOduration=2.854638204 podStartE2EDuration="2.854638204s" podCreationTimestamp="2026-02-03 10:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:34.831749673 +0000 UTC m=+324.987725802" watchObservedRunningTime="2026-02-03 10:07:34.854638204 +0000 UTC m=+325.010614333" Feb 03 10:07:35 crc kubenswrapper[5010]: I0203 10:07:35.817854 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" event={"ID":"b13d6ce0-d473-4529-89a4-2e7b8ad864b3","Type":"ContainerStarted","Data":"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9"} Feb 03 10:07:35 crc kubenswrapper[5010]: I0203 10:07:35.818249 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" event={"ID":"b13d6ce0-d473-4529-89a4-2e7b8ad864b3","Type":"ContainerStarted","Data":"39efd5ea97ac3b2dc44326e763a027b144e99ab980f51894254b44b9a8a1f54d"} Feb 03 10:07:36 crc kubenswrapper[5010]: I0203 10:07:36.823446 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:36 crc kubenswrapper[5010]: I0203 10:07:36.829439 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:36 crc kubenswrapper[5010]: I0203 10:07:36.848431 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" podStartSLOduration=4.848410467 podStartE2EDuration="4.848410467s" podCreationTimestamp="2026-02-03 10:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:35.836729465 +0000 UTC m=+325.992705614" watchObservedRunningTime="2026-02-03 10:07:36.848410467 +0000 UTC m=+327.004386606" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.314830 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.315373 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" podUID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" containerName="controller-manager" containerID="cri-o://b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e" gracePeriod=30 Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.335004 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:38 crc 
kubenswrapper[5010]: I0203 10:07:38.733008 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.777624 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsqq6\" (UniqueName: \"kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6\") pod \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.777715 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles\") pod \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.777740 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca\") pod \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.777757 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert\") pod \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.777816 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config\") pod \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\" (UID: \"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea\") " Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.778985 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" (UID: "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.779100 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca" (OuterVolumeSpecName: "client-ca") pod "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" (UID: "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.779353 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config" (OuterVolumeSpecName: "config") pod "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" (UID: "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.783935 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" (UID: "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.784066 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6" (OuterVolumeSpecName: "kube-api-access-dsqq6") pod "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" (UID: "b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea"). InnerVolumeSpecName "kube-api-access-dsqq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.835002 5010 generic.go:334] "Generic (PLEG): container finished" podID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" containerID="b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e" exitCode=0 Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.835417 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.835398 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" event={"ID":"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea","Type":"ContainerDied","Data":"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e"} Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.835785 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h" event={"ID":"b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea","Type":"ContainerDied","Data":"1c4e6d1216d15486952944a34883d2752e446df691ee61abdfb4affcfd9e809d"} Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.835810 5010 scope.go:117] "RemoveContainer" containerID="b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.852469 5010 scope.go:117] "RemoveContainer" containerID="b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e" Feb 03 10:07:38 crc kubenswrapper[5010]: E0203 10:07:38.853243 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e\": container with ID starting with b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e not found: ID does not exist" containerID="b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.853306 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e"} err="failed to get container status \"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e\": rpc error: code = NotFound desc = could not find container \"b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e\": container with ID starting with b0660ddfedaa25e959204ee75fbb833e3e5894c77394f8ec6ebb9222957ce61e not found: ID does not exist" Feb 03 10:07:38 crc kubenswrapper[5010]: 
I0203 10:07:38.865058 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.868934 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-cjf6h"] Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.878903 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.878938 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsqq6\" (UniqueName: \"kubernetes.io/projected/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-kube-api-access-dsqq6\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.878950 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.878961 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:38 crc kubenswrapper[5010]: I0203 10:07:38.878972 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.840736 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" podUID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" containerName="route-controller-manager" containerID="cri-o://681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9" gracePeriod=30 Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.991428 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:07:39 crc kubenswrapper[5010]: E0203 10:07:39.991687 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" containerName="controller-manager" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.991704 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" containerName="controller-manager" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.991827 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" containerName="controller-manager" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.992264 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.996984 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.997193 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.998073 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.998201 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.998432 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 10:07:39 crc kubenswrapper[5010]: I0203 10:07:39.998684 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.001131 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.001681 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.092363 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.092678 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws4n7\" (UniqueName: \"kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.092706 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.092725 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.092755 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.197095 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.197138 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws4n7\" (UniqueName: \"kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.197182 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.197201 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.197262 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.198565 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.198565 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.202021 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " 
pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.202651 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.215917 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws4n7\" (UniqueName: \"kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7\") pod \"controller-manager-6cb96b48f7-5mzp6\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.251099 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.298759 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert\") pod \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.298842 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjv6z\" (UniqueName: \"kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z\") pod \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.298872 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca\") pod \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.298926 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config\") pod \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\" (UID: \"b13d6ce0-d473-4529-89a4-2e7b8ad864b3\") " Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.299769 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config" (OuterVolumeSpecName: "config") pod "b13d6ce0-d473-4529-89a4-2e7b8ad864b3" (UID: "b13d6ce0-d473-4529-89a4-2e7b8ad864b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.299865 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "b13d6ce0-d473-4529-89a4-2e7b8ad864b3" (UID: "b13d6ce0-d473-4529-89a4-2e7b8ad864b3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.302791 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b13d6ce0-d473-4529-89a4-2e7b8ad864b3" (UID: "b13d6ce0-d473-4529-89a4-2e7b8ad864b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.305356 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z" (OuterVolumeSpecName: "kube-api-access-qjv6z") pod "b13d6ce0-d473-4529-89a4-2e7b8ad864b3" (UID: "b13d6ce0-d473-4529-89a4-2e7b8ad864b3"). InnerVolumeSpecName "kube-api-access-qjv6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.330697 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.400780 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.400820 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.400837 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjv6z\" (UniqueName: \"kubernetes.io/projected/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-kube-api-access-qjv6z\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.400855 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b13d6ce0-d473-4529-89a4-2e7b8ad864b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.509799 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea" path="/var/lib/kubelet/pods/b6c2f4f4-f133-4244-b6dc-5fda3c6f28ea/volumes" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.565277 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:07:40 crc kubenswrapper[5010]: W0203 10:07:40.566189 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91761982_f6eb_4427_9ca6_274992d3ecc4.slice/crio-05f43ef7831519075585445aeedd267d98d6ff0e1d8a989c20d1a24d5d0d35fd WatchSource:0}: Error finding container 05f43ef7831519075585445aeedd267d98d6ff0e1d8a989c20d1a24d5d0d35fd: Status 404 returned error can't find the container with id 05f43ef7831519075585445aeedd267d98d6ff0e1d8a989c20d1a24d5d0d35fd Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.847731 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" event={"ID":"91761982-f6eb-4427-9ca6-274992d3ecc4","Type":"ContainerStarted","Data":"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6"} Feb 03 10:07:40 crc 
kubenswrapper[5010]: I0203 10:07:40.847960 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.847974 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" event={"ID":"91761982-f6eb-4427-9ca6-274992d3ecc4","Type":"ContainerStarted","Data":"05f43ef7831519075585445aeedd267d98d6ff0e1d8a989c20d1a24d5d0d35fd"} Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.848743 5010 generic.go:334] "Generic (PLEG): container finished" podID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" containerID="681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9" exitCode=0 Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.848766 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" event={"ID":"b13d6ce0-d473-4529-89a4-2e7b8ad864b3","Type":"ContainerDied","Data":"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9"} Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.848784 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" event={"ID":"b13d6ce0-d473-4529-89a4-2e7b8ad864b3","Type":"ContainerDied","Data":"39efd5ea97ac3b2dc44326e763a027b144e99ab980f51894254b44b9a8a1f54d"} Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.848798 5010 scope.go:117] "RemoveContainer" containerID="681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.848911 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.859045 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.874966 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" podStartSLOduration=2.874946413 podStartE2EDuration="2.874946413s" podCreationTimestamp="2026-02-03 10:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:40.871206804 +0000 UTC m=+331.027182933" watchObservedRunningTime="2026-02-03 10:07:40.874946413 +0000 UTC m=+331.030922542" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.877932 5010 scope.go:117] "RemoveContainer" containerID="681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9" Feb 03 10:07:40 crc kubenswrapper[5010]: E0203 10:07:40.878288 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9\": container with ID starting with 681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9 not found: ID does not exist" containerID="681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.878330 5010 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9"} err="failed to get container status \"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9\": rpc error: code = NotFound desc = could not find container \"681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9\": container with ID starting with 681d13b39d1655f21a90af5ef2d9b470f6389a29c6f81c1197009d96aaa2a1f9 not found: ID does not exist" Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.885927 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:40 crc kubenswrapper[5010]: I0203 10:07:40.888462 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-6dhjw"] Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.509577 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" path="/var/lib/kubelet/pods/b13d6ce0-d473-4529-89a4-2e7b8ad864b3/volumes" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.989920 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"] Feb 03 10:07:42 crc kubenswrapper[5010]: E0203 10:07:42.990175 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" containerName="route-controller-manager" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.990196 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" containerName="route-controller-manager" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.990340 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13d6ce0-d473-4529-89a4-2e7b8ad864b3" containerName="route-controller-manager" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.990805 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.992560 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.993029 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.993964 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.994255 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.994462 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 10:07:42 crc kubenswrapper[5010]: I0203 10:07:42.997194 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.001460 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"] Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.033761 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csb89\" (UniqueName: \"kubernetes.io/projected/8628475b-46cd-4b61-8aa2-d36a3fe3af47-kube-api-access-csb89\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.033844 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-config\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.033919 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8628475b-46cd-4b61-8aa2-d36a3fe3af47-serving-cert\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.033960 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-client-ca\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.134502 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csb89\" (UniqueName: \"kubernetes.io/projected/8628475b-46cd-4b61-8aa2-d36a3fe3af47-kube-api-access-csb89\") pod 
\"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.134555 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-config\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.134620 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8628475b-46cd-4b61-8aa2-d36a3fe3af47-serving-cert\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.134654 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-client-ca\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.135559 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-client-ca\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.136671 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-config\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.141779 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8628475b-46cd-4b61-8aa2-d36a3fe3af47-serving-cert\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.158583 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csb89\" (UniqueName: \"kubernetes.io/projected/8628475b-46cd-4b61-8aa2-d36a3fe3af47-kube-api-access-csb89\") pod \"route-controller-manager-5dcb9544cc-cd6nz\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.307854 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.698463 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"] Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.868441 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" event={"ID":"8628475b-46cd-4b61-8aa2-d36a3fe3af47","Type":"ContainerStarted","Data":"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"} Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.868827 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" event={"ID":"8628475b-46cd-4b61-8aa2-d36a3fe3af47","Type":"ContainerStarted","Data":"25d16be7d88ebfce6abf5288d6a1be5994b1be679a832ffc963f1662c6ecad64"} Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.869271 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.871005 5010 patch_prober.go:28] interesting pod/route-controller-manager-5dcb9544cc-cd6nz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.871055 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" podUID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 03 10:07:43 crc kubenswrapper[5010]: I0203 10:07:43.911806 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" podStartSLOduration=5.911791624 podStartE2EDuration="5.911791624s" podCreationTimestamp="2026-02-03 10:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:07:43.909900623 +0000 UTC m=+334.065876752" watchObservedRunningTime="2026-02-03 10:07:43.911791624 +0000 UTC m=+334.067767753" Feb 03 10:07:44 crc kubenswrapper[5010]: I0203 10:07:44.880261 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.402587 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.403654 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" podUID="91761982-f6eb-4427-9ca6-274992d3ecc4" containerName="controller-manager" containerID="cri-o://238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6" gracePeriod=30 Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.914373 5010 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.935168 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles\") pod \"91761982-f6eb-4427-9ca6-274992d3ecc4\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.935261 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws4n7\" (UniqueName: \"kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7\") pod \"91761982-f6eb-4427-9ca6-274992d3ecc4\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.935288 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config\") pod \"91761982-f6eb-4427-9ca6-274992d3ecc4\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.935352 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert\") pod \"91761982-f6eb-4427-9ca6-274992d3ecc4\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.935428 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca\") pod \"91761982-f6eb-4427-9ca6-274992d3ecc4\" (UID: \"91761982-f6eb-4427-9ca6-274992d3ecc4\") " Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.936148 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca" (OuterVolumeSpecName: "client-ca") pod "91761982-f6eb-4427-9ca6-274992d3ecc4" (UID: "91761982-f6eb-4427-9ca6-274992d3ecc4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.936574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "91761982-f6eb-4427-9ca6-274992d3ecc4" (UID: "91761982-f6eb-4427-9ca6-274992d3ecc4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.937580 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config" (OuterVolumeSpecName: "config") pod "91761982-f6eb-4427-9ca6-274992d3ecc4" (UID: "91761982-f6eb-4427-9ca6-274992d3ecc4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.956602 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7" (OuterVolumeSpecName: "kube-api-access-ws4n7") pod "91761982-f6eb-4427-9ca6-274992d3ecc4" (UID: "91761982-f6eb-4427-9ca6-274992d3ecc4"). 
InnerVolumeSpecName "kube-api-access-ws4n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:08:06 crc kubenswrapper[5010]: I0203 10:08:06.961649 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "91761982-f6eb-4427-9ca6-274992d3ecc4" (UID: "91761982-f6eb-4427-9ca6-274992d3ecc4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.037065 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.037108 5010 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.037126 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws4n7\" (UniqueName: \"kubernetes.io/projected/91761982-f6eb-4427-9ca6-274992d3ecc4-kube-api-access-ws4n7\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.037139 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91761982-f6eb-4427-9ca6-274992d3ecc4-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.037151 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91761982-f6eb-4427-9ca6-274992d3ecc4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.193068 5010 generic.go:334] "Generic (PLEG): container finished" podID="91761982-f6eb-4427-9ca6-274992d3ecc4" containerID="238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6" exitCode=0 Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.193117 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" event={"ID":"91761982-f6eb-4427-9ca6-274992d3ecc4","Type":"ContainerDied","Data":"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6"} Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.193129 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.193150 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6" event={"ID":"91761982-f6eb-4427-9ca6-274992d3ecc4","Type":"ContainerDied","Data":"05f43ef7831519075585445aeedd267d98d6ff0e1d8a989c20d1a24d5d0d35fd"} Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.193175 5010 scope.go:117] "RemoveContainer" containerID="238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.212369 5010 scope.go:117] "RemoveContainer" containerID="238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6" Feb 03 10:08:07 crc kubenswrapper[5010]: E0203 10:08:07.212967 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6\": container with ID starting with 238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6 not found: ID does not exist" containerID="238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.213115 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6"} err="failed to get container status \"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6\": rpc error: code = NotFound desc = could not find container \"238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6\": container with ID starting with 238f90349420137aab22179abf9df27712cfbcc77c105f08c7769016243670f6 not found: ID does not exist" Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.237934 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:08:07 crc kubenswrapper[5010]: I0203 10:08:07.243855 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-5mzp6"] Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.002333 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"] Feb 03 10:08:08 crc kubenswrapper[5010]: E0203 10:08:08.002553 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91761982-f6eb-4427-9ca6-274992d3ecc4" containerName="controller-manager" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.002567 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="91761982-f6eb-4427-9ca6-274992d3ecc4" containerName="controller-manager" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.002668 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="91761982-f6eb-4427-9ca6-274992d3ecc4" containerName="controller-manager" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.003030 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.005538 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.005872 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.006058 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.006339 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.006517 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.006770 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.014357 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"] Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.016509 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.048419 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-serving-cert\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.048518 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd4qh\" (UniqueName: \"kubernetes.io/projected/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-kube-api-access-bd4qh\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.048557 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.048651 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-client-ca\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.048681 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-config\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.149872 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-client-ca\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.150125 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-config\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.150148 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-serving-cert\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.150193 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd4qh\" (UniqueName: \"kubernetes.io/projected/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-kube-api-access-bd4qh\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.150230 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.151058 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-proxy-ca-bundles\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.151141 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-client-ca\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.152265 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-config\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" 
Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.156321 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-serving-cert\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"
Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.173228 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd4qh\" (UniqueName: \"kubernetes.io/projected/49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2-kube-api-access-bd4qh\") pod \"controller-manager-5d5bd7d9c6-lw68q\" (UID: \"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2\") " pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"
Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.385136 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"
Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.508723 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91761982-f6eb-4427-9ca6-274992d3ecc4" path="/var/lib/kubelet/pods/91761982-f6eb-4427-9ca6-274992d3ecc4/volumes"
Feb 03 10:08:08 crc kubenswrapper[5010]: I0203 10:08:08.768823 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"]
Feb 03 10:08:09 crc kubenswrapper[5010]: I0203 10:08:09.204598 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" event={"ID":"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2","Type":"ContainerStarted","Data":"28248090b01669d75346e2b8e920ede3336868da0bf379c5facee834ccde111b"}
Feb 03 10:08:09 crc kubenswrapper[5010]: I0203 10:08:09.204648 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" event={"ID":"49fea28f-ef6a-4010-a0c5-a2d3c0ff06c2","Type":"ContainerStarted","Data":"b970ef5622cdee8c14fcff17c79ffb9eac42f7837974356dad930d3ef4056e23"}
Feb 03 10:08:09 crc kubenswrapper[5010]: I0203 10:08:09.204830 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"
Feb 03 10:08:09 crc kubenswrapper[5010]: I0203 10:08:09.224498 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q" podStartSLOduration=3.224476995 podStartE2EDuration="3.224476995s" podCreationTimestamp="2026-02-03 10:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:08:09.222472592 +0000 UTC m=+359.378448721" watchObservedRunningTime="2026-02-03 10:08:09.224476995 +0000 UTC m=+359.380453124"
Feb 03 10:08:09 crc kubenswrapper[5010]: I0203 10:08:09.229298 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d5bd7d9c6-lw68q"
Feb 03 10:08:16 crc kubenswrapper[5010]: I0203 10:08:16.389903 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:08:16 crc kubenswrapper[5010]: I0203 10:08:16.390679 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:08:26 crc kubenswrapper[5010]: I0203 10:08:26.408983 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"]
Feb 03 10:08:26 crc kubenswrapper[5010]: I0203 10:08:26.410256 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" podUID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerName="route-controller-manager" containerID="cri-o://94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e" gracePeriod=30
Feb 03 10:08:26 crc kubenswrapper[5010]: I0203 10:08:26.886813 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.004658 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csb89\" (UniqueName: \"kubernetes.io/projected/8628475b-46cd-4b61-8aa2-d36a3fe3af47-kube-api-access-csb89\") pod \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") "
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.006129 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8628475b-46cd-4b61-8aa2-d36a3fe3af47-serving-cert\") pod \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") "
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.006329 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-client-ca\") pod \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") "
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.006999 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-client-ca" (OuterVolumeSpecName: "client-ca") pod "8628475b-46cd-4b61-8aa2-d36a3fe3af47" (UID: "8628475b-46cd-4b61-8aa2-d36a3fe3af47"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.007303 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-config\") pod \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\" (UID: \"8628475b-46cd-4b61-8aa2-d36a3fe3af47\") "
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.007911 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-config" (OuterVolumeSpecName: "config") pod "8628475b-46cd-4b61-8aa2-d36a3fe3af47" (UID: "8628475b-46cd-4b61-8aa2-d36a3fe3af47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.009790 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-config\") on node \"crc\" DevicePath \"\""
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.010133 5010 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8628475b-46cd-4b61-8aa2-d36a3fe3af47-client-ca\") on node \"crc\" DevicePath \"\""
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.014376 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8628475b-46cd-4b61-8aa2-d36a3fe3af47-kube-api-access-csb89" (OuterVolumeSpecName: "kube-api-access-csb89") pod "8628475b-46cd-4b61-8aa2-d36a3fe3af47" (UID: "8628475b-46cd-4b61-8aa2-d36a3fe3af47"). InnerVolumeSpecName "kube-api-access-csb89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.015161 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8628475b-46cd-4b61-8aa2-d36a3fe3af47-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8628475b-46cd-4b61-8aa2-d36a3fe3af47" (UID: "8628475b-46cd-4b61-8aa2-d36a3fe3af47"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.111946 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csb89\" (UniqueName: \"kubernetes.io/projected/8628475b-46cd-4b61-8aa2-d36a3fe3af47-kube-api-access-csb89\") on node \"crc\" DevicePath \"\""
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.111982 5010 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8628475b-46cd-4b61-8aa2-d36a3fe3af47-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.300928 5010 generic.go:334] "Generic (PLEG): container finished" podID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerID="94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e" exitCode=0
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.300993 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.301012 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" event={"ID":"8628475b-46cd-4b61-8aa2-d36a3fe3af47","Type":"ContainerDied","Data":"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"}
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.302023 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz" event={"ID":"8628475b-46cd-4b61-8aa2-d36a3fe3af47","Type":"ContainerDied","Data":"25d16be7d88ebfce6abf5288d6a1be5994b1be679a832ffc963f1662c6ecad64"}
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.302102 5010 scope.go:117] "RemoveContainer" containerID="94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.338094 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"]
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.344108 5010 scope.go:117] "RemoveContainer" containerID="94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"
Feb 03 10:08:27 crc kubenswrapper[5010]: E0203 10:08:27.344988 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e\": container with ID starting with 94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e not found: ID does not exist" containerID="94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.345072 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e"} err="failed to get container status \"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e\": rpc error: code = NotFound desc = could not find container \"94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e\": container with ID starting with 94b24c365f61bf9d12c80fba24155c0cfdde64110501fdaf9f56fd39b9e1b75e not found: ID does not exist"
Feb 03 10:08:27 crc kubenswrapper[5010]: I0203 10:08:27.349365 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-cd6nz"]
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.012414 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"]
Feb 03 10:08:28 crc kubenswrapper[5010]: E0203 10:08:28.013857 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerName="route-controller-manager"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.013943 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerName="route-controller-manager"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.014159 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" containerName="route-controller-manager"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.014940 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.016856 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.017152 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.017326 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.017686 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.019066 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.020096 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.025809 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"]
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.127078 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58aa9ea0-6947-49a1-80ca-71542cbdd2df-client-ca\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.127156 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58aa9ea0-6947-49a1-80ca-71542cbdd2df-config\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.127577 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmlmb\" (UniqueName: \"kubernetes.io/projected/58aa9ea0-6947-49a1-80ca-71542cbdd2df-kube-api-access-gmlmb\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.127646 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58aa9ea0-6947-49a1-80ca-71542cbdd2df-serving-cert\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.229005 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmlmb\" (UniqueName: \"kubernetes.io/projected/58aa9ea0-6947-49a1-80ca-71542cbdd2df-kube-api-access-gmlmb\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.229086 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58aa9ea0-6947-49a1-80ca-71542cbdd2df-serving-cert\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.229170 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58aa9ea0-6947-49a1-80ca-71542cbdd2df-client-ca\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.229255 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58aa9ea0-6947-49a1-80ca-71542cbdd2df-config\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.230356 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58aa9ea0-6947-49a1-80ca-71542cbdd2df-client-ca\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.230515 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58aa9ea0-6947-49a1-80ca-71542cbdd2df-config\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.240503 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58aa9ea0-6947-49a1-80ca-71542cbdd2df-serving-cert\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.248787 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmlmb\" (UniqueName: \"kubernetes.io/projected/58aa9ea0-6947-49a1-80ca-71542cbdd2df-kube-api-access-gmlmb\") pod \"route-controller-manager-bc8d5fc56-ch7b9\" (UID: \"58aa9ea0-6947-49a1-80ca-71542cbdd2df\") " pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.366322 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.510308 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8628475b-46cd-4b61-8aa2-d36a3fe3af47" path="/var/lib/kubelet/pods/8628475b-46cd-4b61-8aa2-d36a3fe3af47/volumes"
Feb 03 10:08:28 crc kubenswrapper[5010]: I0203 10:08:28.787327 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"]
Feb 03 10:08:28 crc kubenswrapper[5010]: W0203 10:08:28.799653 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58aa9ea0_6947_49a1_80ca_71542cbdd2df.slice/crio-adac1350597f4d37be65c4cdb5d880f9ec298abe66b167d4cb606e3c20877c1c WatchSource:0}: Error finding container adac1350597f4d37be65c4cdb5d880f9ec298abe66b167d4cb606e3c20877c1c: Status 404 returned error can't find the container with id adac1350597f4d37be65c4cdb5d880f9ec298abe66b167d4cb606e3c20877c1c
Feb 03 10:08:29 crc kubenswrapper[5010]: I0203 10:08:29.313929 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" event={"ID":"58aa9ea0-6947-49a1-80ca-71542cbdd2df","Type":"ContainerStarted","Data":"90d97ef6e79f118ad5af9abaddf5b989d898abbb9509c202bb53eceef6ac6be3"}
Feb 03 10:08:29 crc kubenswrapper[5010]: I0203 10:08:29.314295 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" event={"ID":"58aa9ea0-6947-49a1-80ca-71542cbdd2df","Type":"ContainerStarted","Data":"adac1350597f4d37be65c4cdb5d880f9ec298abe66b167d4cb606e3c20877c1c"}
Feb 03 10:08:29 crc kubenswrapper[5010]: I0203 10:08:29.314712 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:29 crc kubenswrapper[5010]: I0203 10:08:29.323838 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9"
Feb 03 10:08:29 crc kubenswrapper[5010]: I0203 10:08:29.342232 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bc8d5fc56-ch7b9" podStartSLOduration=3.342199169 podStartE2EDuration="3.342199169s" podCreationTimestamp="2026-02-03 10:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:08:29.339053715 +0000 UTC m=+379.495029864" watchObservedRunningTime="2026-02-03 10:08:29.342199169 +0000 UTC m=+379.498175298"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.173816 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fgqs4"]
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.175177 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.195631 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fgqs4"]
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268473 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx798\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-kube-api-access-gx798\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268561 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-tls\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268718 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/72291d2a-e172-4670-9df7-c4de79cab1a1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268781 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/72291d2a-e172-4670-9df7-c4de79cab1a1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268805 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-trusted-ca\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268957 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.268999 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-certificates\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.269038 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-bound-sa-token\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.290982 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370594 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-certificates\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370640 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-bound-sa-token\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370692 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx798\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-kube-api-access-gx798\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370742 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-tls\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370782 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/72291d2a-e172-4670-9df7-c4de79cab1a1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370802 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/72291d2a-e172-4670-9df7-c4de79cab1a1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.370817 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-trusted-ca\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.372002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-certificates\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.372157 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72291d2a-e172-4670-9df7-c4de79cab1a1-trusted-ca\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.372298 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/72291d2a-e172-4670-9df7-c4de79cab1a1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.376770 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-registry-tls\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.382881 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/72291d2a-e172-4670-9df7-c4de79cab1a1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.387647 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx798\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-kube-api-access-gx798\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.391897 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72291d2a-e172-4670-9df7-c4de79cab1a1-bound-sa-token\") pod \"image-registry-66df7c8f76-fgqs4\" (UID: \"72291d2a-e172-4670-9df7-c4de79cab1a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4"
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.400269 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"]
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.400500 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rhsmk" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="registry-server" containerID="cri-o://3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f" gracePeriod=30
Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.420433 5010 kubelet.go:2437]
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f8ldc"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.420763 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f8ldc" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="registry-server" containerID="cri-o://6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac" gracePeriod=30 Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.435294 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.435551 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" containerID="cri-o://a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b" gracePeriod=30 Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.452687 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.453277 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w967c" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="registry-server" containerID="cri-o://d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08" gracePeriod=30 Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.454367 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lskbc"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.455469 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.462182 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.462413 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5pgxf" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="registry-server" containerID="cri-o://64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" gracePeriod=30 Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.466024 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lskbc"] Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.492197 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.572919 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.572985 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.573275 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9j8h\" (UniqueName: \"kubernetes.io/projected/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-kube-api-access-q9j8h\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.674728 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9j8h\" (UniqueName: \"kubernetes.io/projected/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-kube-api-access-q9j8h\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.674814 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.674838 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.680197 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.681629 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.697004 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9j8h\" (UniqueName: \"kubernetes.io/projected/a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe-kube-api-access-q9j8h\") pod \"marketplace-operator-79b997595-lskbc\" (UID: \"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe\") " pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.762667 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:31 crc kubenswrapper[5010]: E0203 10:08:31.835822 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b is running failed: container process not found" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 10:08:31 crc kubenswrapper[5010]: E0203 10:08:31.837776 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b is running failed: container process not found" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 10:08:31 crc kubenswrapper[5010]: E0203 10:08:31.838805 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b is running failed: container process not found" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 10:08:31 crc kubenswrapper[5010]: E0203 10:08:31.838839 5010 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5pgxf" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="registry-server" Feb 03 10:08:31 crc kubenswrapper[5010]: I0203 10:08:31.982015 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.081011 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content\") pod \"6b321403-09c3-4199-98ce-474deeea9d18\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.081050 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rkwl\" (UniqueName: \"kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl\") pod \"6b321403-09c3-4199-98ce-474deeea9d18\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.081173 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities\") pod \"6b321403-09c3-4199-98ce-474deeea9d18\" (UID: \"6b321403-09c3-4199-98ce-474deeea9d18\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.082276 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities" (OuterVolumeSpecName: "utilities") pod "6b321403-09c3-4199-98ce-474deeea9d18" (UID: "6b321403-09c3-4199-98ce-474deeea9d18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.099157 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl" (OuterVolumeSpecName: "kube-api-access-8rkwl") pod "6b321403-09c3-4199-98ce-474deeea9d18" (UID: "6b321403-09c3-4199-98ce-474deeea9d18"). InnerVolumeSpecName "kube-api-access-8rkwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.150401 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b321403-09c3-4199-98ce-474deeea9d18" (UID: "6b321403-09c3-4199-98ce-474deeea9d18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.182862 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.182913 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b321403-09c3-4199-98ce-474deeea9d18-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.182932 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rkwl\" (UniqueName: \"kubernetes.io/projected/6b321403-09c3-4199-98ce-474deeea9d18-kube-api-access-8rkwl\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.199894 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.206239 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.212437 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.255193 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.257095 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fgqs4"] Feb 03 10:08:32 crc kubenswrapper[5010]: W0203 10:08:32.269434 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72291d2a_e172_4670_9df7_c4de79cab1a1.slice/crio-bdcfcd819707a008d216ee28c8a59fdebeca7cc15a6cf4579f372782cccc49dd WatchSource:0}: Error finding container bdcfcd819707a008d216ee28c8a59fdebeca7cc15a6cf4579f372782cccc49dd: Status 404 returned error can't find the container with id bdcfcd819707a008d216ee28c8a59fdebeca7cc15a6cf4579f372782cccc49dd Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287311 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") pod \"1b5592be-8839-4660-a4c4-ab662fc975eb\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287625 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities\") pod \"5a09b802-00fe-4ff8-983e-58c495061478\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287678 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmnts\" (UniqueName: \"kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts\") pod \"1b5592be-8839-4660-a4c4-ab662fc975eb\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287702 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content\") pod \"5a09b802-00fe-4ff8-983e-58c495061478\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287719 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities\") pod \"778b346c-f503-4364-9757-98c213d89edc\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287744 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw58w\" (UniqueName: \"kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w\") pod \"778b346c-f503-4364-9757-98c213d89edc\" (UID: 
\"778b346c-f503-4364-9757-98c213d89edc\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287759 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjvqs\" (UniqueName: \"kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs\") pod \"5a09b802-00fe-4ff8-983e-58c495061478\" (UID: \"5a09b802-00fe-4ff8-983e-58c495061478\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287788 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content\") pod \"778b346c-f503-4364-9757-98c213d89edc\" (UID: \"778b346c-f503-4364-9757-98c213d89edc\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.287842 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") pod \"1b5592be-8839-4660-a4c4-ab662fc975eb\" (UID: \"1b5592be-8839-4660-a4c4-ab662fc975eb\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.289149 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1b5592be-8839-4660-a4c4-ab662fc975eb" (UID: "1b5592be-8839-4660-a4c4-ab662fc975eb"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.289176 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities" (OuterVolumeSpecName: "utilities") pod "5a09b802-00fe-4ff8-983e-58c495061478" (UID: "5a09b802-00fe-4ff8-983e-58c495061478"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.289414 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities" (OuterVolumeSpecName: "utilities") pod "778b346c-f503-4364-9757-98c213d89edc" (UID: "778b346c-f503-4364-9757-98c213d89edc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.291558 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts" (OuterVolumeSpecName: "kube-api-access-pmnts") pod "1b5592be-8839-4660-a4c4-ab662fc975eb" (UID: "1b5592be-8839-4660-a4c4-ab662fc975eb"). InnerVolumeSpecName "kube-api-access-pmnts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.291577 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1b5592be-8839-4660-a4c4-ab662fc975eb" (UID: "1b5592be-8839-4660-a4c4-ab662fc975eb"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.296226 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs" (OuterVolumeSpecName: "kube-api-access-vjvqs") pod "5a09b802-00fe-4ff8-983e-58c495061478" (UID: "5a09b802-00fe-4ff8-983e-58c495061478"). InnerVolumeSpecName "kube-api-access-vjvqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.297141 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w" (OuterVolumeSpecName: "kube-api-access-mw58w") pod "778b346c-f503-4364-9757-98c213d89edc" (UID: "778b346c-f503-4364-9757-98c213d89edc"). InnerVolumeSpecName "kube-api-access-mw58w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.329845 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "778b346c-f503-4364-9757-98c213d89edc" (UID: "778b346c-f503-4364-9757-98c213d89edc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.341008 5010 generic.go:334] "Generic (PLEG): container finished" podID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" exitCode=0 Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.341076 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerDied","Data":"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.341104 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pgxf" event={"ID":"777b0b1e-96c3-4914-8b7b-d51186433cb7","Type":"ContainerDied","Data":"3ee4a0547eec3952db79e960939ddf437d022a2d426d7a0f64071f60145150ba"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.341119 5010 scope.go:117] "RemoveContainer" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.341239 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pgxf" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.347081 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a09b802-00fe-4ff8-983e-58c495061478" (UID: "5a09b802-00fe-4ff8-983e-58c495061478"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.351045 5010 generic.go:334] "Generic (PLEG): container finished" podID="778b346c-f503-4364-9757-98c213d89edc" containerID="d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08" exitCode=0 Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.351093 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerDied","Data":"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.351119 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w967c" event={"ID":"778b346c-f503-4364-9757-98c213d89edc","Type":"ContainerDied","Data":"ccc904854d56565749138df195a8c2b29f6946a5393227b9fe1b124f630fe4e6"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.351185 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w967c" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.355279 5010 generic.go:334] "Generic (PLEG): container finished" podID="5a09b802-00fe-4ff8-983e-58c495061478" containerID="6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac" exitCode=0 Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.355348 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerDied","Data":"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.355370 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ldc" event={"ID":"5a09b802-00fe-4ff8-983e-58c495061478","Type":"ContainerDied","Data":"9b3e23c6c17315ac65a0626a6f5dc6fcfc45753c23f65c38f8420f31fc344706"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.355428 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8ldc" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.357782 5010 generic.go:334] "Generic (PLEG): container finished" podID="6b321403-09c3-4199-98ce-474deeea9d18" containerID="3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f" exitCode=0 Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.358146 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhsmk" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.358527 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerDied","Data":"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.358637 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhsmk" event={"ID":"6b321403-09c3-4199-98ce-474deeea9d18","Type":"ContainerDied","Data":"63d8474bfb4a1a954341a0c6e3ac0ed4a51edc38981d0b3fd911b0c631516f52"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.359642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" event={"ID":"72291d2a-e172-4670-9df7-c4de79cab1a1","Type":"ContainerStarted","Data":"bdcfcd819707a008d216ee28c8a59fdebeca7cc15a6cf4579f372782cccc49dd"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.360878 5010 generic.go:334] "Generic (PLEG): container finished" podID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerID="a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b" exitCode=0 Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.360926 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" event={"ID":"1b5592be-8839-4660-a4c4-ab662fc975eb","Type":"ContainerDied","Data":"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.360945 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.361000 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kg4f" event={"ID":"1b5592be-8839-4660-a4c4-ab662fc975eb","Type":"ContainerDied","Data":"2ade3cdf2529ce4152b52a6e4a45299bf6c1e2325f1341f2c73a3d85ad1e71e8"} Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.376165 5010 scope.go:117] "RemoveContainer" containerID="8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.378939 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.382250 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w967c"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.388934 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities\") pod \"777b0b1e-96c3-4914-8b7b-d51186433cb7\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389002 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content\") pod \"777b0b1e-96c3-4914-8b7b-d51186433cb7\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389036 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ndvzg\" (UniqueName: \"kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg\") pod \"777b0b1e-96c3-4914-8b7b-d51186433cb7\" (UID: \"777b0b1e-96c3-4914-8b7b-d51186433cb7\") " Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389435 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmnts\" (UniqueName: \"kubernetes.io/projected/1b5592be-8839-4660-a4c4-ab662fc975eb-kube-api-access-pmnts\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389457 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389470 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389480 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw58w\" (UniqueName: \"kubernetes.io/projected/778b346c-f503-4364-9757-98c213d89edc-kube-api-access-mw58w\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389491 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjvqs\" (UniqueName: \"kubernetes.io/projected/5a09b802-00fe-4ff8-983e-58c495061478-kube-api-access-vjvqs\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389505 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778b346c-f503-4364-9757-98c213d89edc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389514 5010 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389522 5010 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1b5592be-8839-4660-a4c4-ab662fc975eb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389534 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a09b802-00fe-4ff8-983e-58c495061478-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.389799 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities" (OuterVolumeSpecName: "utilities") pod "777b0b1e-96c3-4914-8b7b-d51186433cb7" (UID: "777b0b1e-96c3-4914-8b7b-d51186433cb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.392525 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg" (OuterVolumeSpecName: "kube-api-access-ndvzg") pod "777b0b1e-96c3-4914-8b7b-d51186433cb7" (UID: "777b0b1e-96c3-4914-8b7b-d51186433cb7"). InnerVolumeSpecName "kube-api-access-ndvzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.411358 5010 scope.go:117] "RemoveContainer" containerID="fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.423818 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lskbc"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.450428 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.452294 5010 scope.go:117] "RemoveContainer" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.458235 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rhsmk"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.458958 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"] Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.459751 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b\": container with ID starting with 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b not found: ID does not exist" containerID="64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.459803 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b"} err="failed to get container status \"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b\": rpc error: code = NotFound desc = could not find container \"64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b\": container with ID starting with 64f520ca0095faa44f88b1689ecd864056756f6514ec3fd8f8376186379bc68b not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.459833 5010 scope.go:117] "RemoveContainer" containerID="8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.460302 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e\": container with ID starting with 8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e not found: ID does not exist" containerID="8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.460332 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e"} err="failed to get container status \"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e\": rpc error: code = NotFound desc = could not find container \"8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e\": container with ID starting with 8155e7f2f727e4e9e74359fe98f1783e8c9b620a89fe732296fe63f5146a208e not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.460352 5010 scope.go:117] "RemoveContainer" 
containerID="fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.460962 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1\": container with ID starting with fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1 not found: ID does not exist" containerID="fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.461005 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1"} err="failed to get container status \"fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1\": rpc error: code = NotFound desc = could not find container \"fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1\": container with ID starting with fca3a0de046b6aa0bbd88f4d836f2482bd38d25ab3a9c5bce8610c44b5a5caf1 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.461033 5010 scope.go:117] "RemoveContainer" containerID="d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.461710 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kg4f"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.477104 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f8ldc"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.482813 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f8ldc"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.485766 5010 scope.go:117] "RemoveContainer" containerID="699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.494647 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndvzg\" (UniqueName: \"kubernetes.io/projected/777b0b1e-96c3-4914-8b7b-d51186433cb7-kube-api-access-ndvzg\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.494673 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.509269 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" path="/var/lib/kubelet/pods/1b5592be-8839-4660-a4c4-ab662fc975eb/volumes" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.510398 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a09b802-00fe-4ff8-983e-58c495061478" path="/var/lib/kubelet/pods/5a09b802-00fe-4ff8-983e-58c495061478/volumes" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.511274 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b321403-09c3-4199-98ce-474deeea9d18" path="/var/lib/kubelet/pods/6b321403-09c3-4199-98ce-474deeea9d18/volumes" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.512356 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778b346c-f503-4364-9757-98c213d89edc" 
path="/var/lib/kubelet/pods/778b346c-f503-4364-9757-98c213d89edc/volumes" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.522498 5010 scope.go:117] "RemoveContainer" containerID="c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.539183 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "777b0b1e-96c3-4914-8b7b-d51186433cb7" (UID: "777b0b1e-96c3-4914-8b7b-d51186433cb7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.545394 5010 scope.go:117] "RemoveContainer" containerID="d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.545789 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08\": container with ID starting with d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08 not found: ID does not exist" containerID="d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.545826 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08"} err="failed to get container status \"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08\": rpc error: code = NotFound desc = could not find container \"d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08\": container with ID starting with d89e77dc83f60b599c8127f09cd6112d1532867e0fd87ea0ee76f0f55fa29d08 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.545855 5010 scope.go:117] "RemoveContainer" containerID="699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.546339 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd\": container with ID starting with 699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd not found: ID does not exist" containerID="699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.546398 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd"} err="failed to get container status \"699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd\": rpc error: code = NotFound desc = could not find container \"699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd\": container with ID starting with 699afee0a95665e8a36e41507d5ccbe7b3ccff56912d72c7d06a736bf812bbdd not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.546424 5010 scope.go:117] "RemoveContainer" containerID="c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.546759 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39\": container with ID starting with c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39 not found: ID does not exist" containerID="c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.546784 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39"} err="failed to get container status \"c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39\": rpc error: code = NotFound desc = could not find container \"c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39\": container with ID starting with c81b301246f1acefeee01e3df5b61b48f31087c63825e8dbd41865fd47f36a39 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.546822 5010 scope.go:117] "RemoveContainer" containerID="6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.563733 5010 scope.go:117] "RemoveContainer" containerID="f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.577105 5010 scope.go:117] "RemoveContainer" containerID="fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.595402 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777b0b1e-96c3-4914-8b7b-d51186433cb7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.596337 5010 scope.go:117] "RemoveContainer" containerID="6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.596816 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac\": container with ID starting with 6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac not found: ID does not exist" containerID="6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.596846 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac"} err="failed to get container status \"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac\": rpc error: code = NotFound desc = could not find container \"6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac\": container with ID starting with 6e1c966bf09028759b906c0bd435e7ef3182493ca2b182bc26917ad117ddd0ac not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.596888 5010 scope.go:117] "RemoveContainer" containerID="f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.597263 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50\": container with ID starting with f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50 not found: ID does not exist" 
containerID="f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.597298 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50"} err="failed to get container status \"f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50\": rpc error: code = NotFound desc = could not find container \"f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50\": container with ID starting with f7246dd3bc99c4cd6a1502b56f24cd3f2d35a480eabcd5540eeeffabedaf8c50 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.597318 5010 scope.go:117] "RemoveContainer" containerID="fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.597634 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908\": container with ID starting with fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908 not found: ID does not exist" containerID="fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.597663 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908"} err="failed to get container status \"fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908\": rpc error: code = NotFound desc = could not find container \"fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908\": container with ID starting with fb38973c90eca1b297983e38725d0efd4de1191c9f324379b771a27b35bf9908 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.597757 5010 scope.go:117] "RemoveContainer" containerID="3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.620670 5010 scope.go:117] "RemoveContainer" containerID="ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.634781 5010 scope.go:117] "RemoveContainer" containerID="bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.647381 5010 scope.go:117] "RemoveContainer" containerID="3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.647776 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f\": container with ID starting with 3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f not found: ID does not exist" containerID="3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.647812 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f"} err="failed to get container status \"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f\": rpc error: code = NotFound desc = could not find container 
\"3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f\": container with ID starting with 3fdffdfb2e97163e9b5659b82f9edb3a8717dbc250d60105f3b5033d16ea361f not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.647835 5010 scope.go:117] "RemoveContainer" containerID="ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.648061 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e\": container with ID starting with ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e not found: ID does not exist" containerID="ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.648093 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e"} err="failed to get container status \"ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e\": rpc error: code = NotFound desc = could not find container \"ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e\": container with ID starting with ad30fa1f7476d320a459e2e205f7b55a08c426642d715abf9ce2c1d8b8336f6e not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.648116 5010 scope.go:117] "RemoveContainer" containerID="bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.648369 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47\": container with ID starting with bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47 not found: ID does not exist" containerID="bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.648415 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47"} err="failed to get container status \"bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47\": rpc error: code = NotFound desc = could not find container \"bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47\": container with ID starting with bcd8a889807bd25445dfb722549faf19cd01bc11e1f8fd1048942ecd1b7beb47 not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.648449 5010 scope.go:117] "RemoveContainer" containerID="a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.669534 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.669706 5010 scope.go:117] "RemoveContainer" containerID="a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b" Feb 03 10:08:32 crc kubenswrapper[5010]: E0203 10:08:32.671083 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b\": container with ID starting with a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b 
not found: ID does not exist" containerID="a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.671114 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b"} err="failed to get container status \"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b\": rpc error: code = NotFound desc = could not find container \"a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b\": container with ID starting with a767b05b55c4a6678814ffc20e2864d886a73b266a38944636faa5166130a50b not found: ID does not exist" Feb 03 10:08:32 crc kubenswrapper[5010]: I0203 10:08:32.674686 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5pgxf"] Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.367558 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" event={"ID":"72291d2a-e172-4670-9df7-c4de79cab1a1","Type":"ContainerStarted","Data":"d2a25ce869bce00299f0a36e2bb34ce27b46d433c773c7af24e6c88b7046ec27"} Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.368793 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.370366 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" event={"ID":"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe","Type":"ContainerStarted","Data":"0c5d00a618b4fe3bf12bea8272155363e2ac87eb3b57761a6bc995e47e6d7e8e"} Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.370413 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" event={"ID":"a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe","Type":"ContainerStarted","Data":"8c053b62d9c03e959bb50f47c15edab6c6f4fc5f6b6bd852c66e0416a6f03de1"} Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.370606 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.374418 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.398765 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" podStartSLOduration=2.398745445 podStartE2EDuration="2.398745445s" podCreationTimestamp="2026-02-03 10:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:08:33.393584198 +0000 UTC m=+383.549560337" watchObservedRunningTime="2026-02-03 10:08:33.398745445 +0000 UTC m=+383.554721574" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.423367 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lskbc" podStartSLOduration=2.423352002 podStartE2EDuration="2.423352002s" podCreationTimestamp="2026-02-03 10:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:08:33.420094505 +0000 UTC 
m=+383.576070634" watchObservedRunningTime="2026-02-03 10:08:33.423352002 +0000 UTC m=+383.579328131" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.614938 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-96wzf"] Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615179 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615195 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615207 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615234 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615245 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615254 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615266 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615275 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615285 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615293 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615306 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615315 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="extract-utilities" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615327 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615335 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615347 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615355 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615366 5010 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615374 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615387 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615409 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615423 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615431 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615455 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615462 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="extract-content" Feb 03 10:08:33 crc kubenswrapper[5010]: E0203 10:08:33.615473 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615481 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615603 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b5592be-8839-4660-a4c4-ab662fc975eb" containerName="marketplace-operator" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615622 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a09b802-00fe-4ff8-983e-58c495061478" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615637 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b321403-09c3-4199-98ce-474deeea9d18" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615647 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="778b346c-f503-4364-9757-98c213d89edc" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.615655 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" containerName="registry-server" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.616489 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.621858 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.627193 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-96wzf"] Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.709553 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-utilities\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.709626 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-catalog-content\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.709869 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdrmx\" (UniqueName: \"kubernetes.io/projected/0a04fc61-013a-4515-92ca-e620b3d376d5-kube-api-access-jdrmx\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.811365 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdrmx\" (UniqueName: \"kubernetes.io/projected/0a04fc61-013a-4515-92ca-e620b3d376d5-kube-api-access-jdrmx\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.811459 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-utilities\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.811477 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-catalog-content\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.811895 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-catalog-content\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.811957 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a04fc61-013a-4515-92ca-e620b3d376d5-utilities\") pod \"redhat-marketplace-96wzf\" (UID: 
\"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.815605 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gz7lx"] Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.817247 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.820432 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.825187 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gz7lx"] Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.837412 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdrmx\" (UniqueName: \"kubernetes.io/projected/0a04fc61-013a-4515-92ca-e620b3d376d5-kube-api-access-jdrmx\") pod \"redhat-marketplace-96wzf\" (UID: \"0a04fc61-013a-4515-92ca-e620b3d376d5\") " pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.912335 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwmm5\" (UniqueName: \"kubernetes.io/projected/1b4caad6-6b6c-452e-9be8-97e7115dbd72-kube-api-access-qwmm5\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.912396 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-utilities\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.912429 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-catalog-content\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:33 crc kubenswrapper[5010]: I0203 10:08:33.941804 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.013615 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwmm5\" (UniqueName: \"kubernetes.io/projected/1b4caad6-6b6c-452e-9be8-97e7115dbd72-kube-api-access-qwmm5\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.013690 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-utilities\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.013757 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-catalog-content\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.014420 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-catalog-content\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.016564 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4caad6-6b6c-452e-9be8-97e7115dbd72-utilities\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.037321 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwmm5\" (UniqueName: \"kubernetes.io/projected/1b4caad6-6b6c-452e-9be8-97e7115dbd72-kube-api-access-qwmm5\") pod \"redhat-operators-gz7lx\" (UID: \"1b4caad6-6b6c-452e-9be8-97e7115dbd72\") " pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.136752 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.422142 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-96wzf"] Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.508962 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777b0b1e-96c3-4914-8b7b-d51186433cb7" path="/var/lib/kubelet/pods/777b0b1e-96c3-4914-8b7b-d51186433cb7/volumes" Feb 03 10:08:34 crc kubenswrapper[5010]: I0203 10:08:34.530020 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gz7lx"] Feb 03 10:08:34 crc kubenswrapper[5010]: W0203 10:08:34.535905 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b4caad6_6b6c_452e_9be8_97e7115dbd72.slice/crio-6507aa35e6193590d1824596d95b8a0a21eb5c3a7b78806fc58e58b064b04809 WatchSource:0}: Error finding container 6507aa35e6193590d1824596d95b8a0a21eb5c3a7b78806fc58e58b064b04809: Status 404 returned error can't find the container with id 6507aa35e6193590d1824596d95b8a0a21eb5c3a7b78806fc58e58b064b04809 Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.387906 5010 generic.go:334] "Generic (PLEG): container finished" podID="1b4caad6-6b6c-452e-9be8-97e7115dbd72" containerID="649d5d5889619b3db5484b734f48a0f661f1b37c23ecc0ba2567cbcf312dac49" exitCode=0 Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.388267 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz7lx" event={"ID":"1b4caad6-6b6c-452e-9be8-97e7115dbd72","Type":"ContainerDied","Data":"649d5d5889619b3db5484b734f48a0f661f1b37c23ecc0ba2567cbcf312dac49"} Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.388704 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz7lx" event={"ID":"1b4caad6-6b6c-452e-9be8-97e7115dbd72","Type":"ContainerStarted","Data":"6507aa35e6193590d1824596d95b8a0a21eb5c3a7b78806fc58e58b064b04809"} Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.390895 5010 generic.go:334] "Generic (PLEG): container finished" podID="0a04fc61-013a-4515-92ca-e620b3d376d5" containerID="4a9e4cdd3bd69602ab7a8af75d7d073fc432b07568f14b8b4f8329cc3a161d22" exitCode=0 Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.391031 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96wzf" event={"ID":"0a04fc61-013a-4515-92ca-e620b3d376d5","Type":"ContainerDied","Data":"4a9e4cdd3bd69602ab7a8af75d7d073fc432b07568f14b8b4f8329cc3a161d22"} Feb 03 10:08:35 crc kubenswrapper[5010]: I0203 10:08:35.391057 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96wzf" event={"ID":"0a04fc61-013a-4515-92ca-e620b3d376d5","Type":"ContainerStarted","Data":"fead6303a7ed8b14298a3b3d0e23569f8415a1b5b1c37a523c55ffa0829f0f01"} Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.022535 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7dtrz"] Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.023676 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.025483 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.028918 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dtrz"] Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.041986 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-catalog-content\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.042035 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5s9r\" (UniqueName: \"kubernetes.io/projected/41f0db19-3c04-4062-94da-f2058d7ef64a-kube-api-access-z5s9r\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.042252 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-utilities\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.143562 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-catalog-content\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.143600 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5s9r\" (UniqueName: \"kubernetes.io/projected/41f0db19-3c04-4062-94da-f2058d7ef64a-kube-api-access-z5s9r\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.143655 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-utilities\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.144165 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-catalog-content\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.144248 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41f0db19-3c04-4062-94da-f2058d7ef64a-utilities\") pod \"community-operators-7dtrz\" (UID: 
\"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.162487 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5s9r\" (UniqueName: \"kubernetes.io/projected/41f0db19-3c04-4062-94da-f2058d7ef64a-kube-api-access-z5s9r\") pod \"community-operators-7dtrz\" (UID: \"41f0db19-3c04-4062-94da-f2058d7ef64a\") " pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.216118 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xwfjv"] Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.217368 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.223744 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.226265 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwfjv"] Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.249787 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-utilities\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.249901 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-catalog-content\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.249967 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhfz\" (UniqueName: \"kubernetes.io/projected/499eebdd-1202-4427-bf19-7ff14c5f8507-kube-api-access-tzhfz\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.341086 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.351857 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-utilities\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.352151 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-catalog-content\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.352306 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhfz\" (UniqueName: \"kubernetes.io/projected/499eebdd-1202-4427-bf19-7ff14c5f8507-kube-api-access-tzhfz\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.352589 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-utilities\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.352666 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/499eebdd-1202-4427-bf19-7ff14c5f8507-catalog-content\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.370284 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhfz\" (UniqueName: \"kubernetes.io/projected/499eebdd-1202-4427-bf19-7ff14c5f8507-kube-api-access-tzhfz\") pod \"certified-operators-xwfjv\" (UID: \"499eebdd-1202-4427-bf19-7ff14c5f8507\") " pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.568126 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.752924 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dtrz"] Feb 03 10:08:36 crc kubenswrapper[5010]: W0203 10:08:36.755876 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41f0db19_3c04_4062_94da_f2058d7ef64a.slice/crio-f33131e79e384fe2afe7360729e82c466d2dd7daf96c2ed6415e011ae52ad36a WatchSource:0}: Error finding container f33131e79e384fe2afe7360729e82c466d2dd7daf96c2ed6415e011ae52ad36a: Status 404 returned error can't find the container with id f33131e79e384fe2afe7360729e82c466d2dd7daf96c2ed6415e011ae52ad36a Feb 03 10:08:36 crc kubenswrapper[5010]: I0203 10:08:36.966527 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xwfjv"] Feb 03 10:08:36 crc kubenswrapper[5010]: W0203 10:08:36.975429 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod499eebdd_1202_4427_bf19_7ff14c5f8507.slice/crio-9930daa4cb3b17269e2f4ce3847ee42981d1f4d57104af430b72251a6b0c459e WatchSource:0}: Error finding container 9930daa4cb3b17269e2f4ce3847ee42981d1f4d57104af430b72251a6b0c459e: Status 404 returned error can't find the container with id 9930daa4cb3b17269e2f4ce3847ee42981d1f4d57104af430b72251a6b0c459e Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.403181 5010 generic.go:334] "Generic (PLEG): container finished" podID="41f0db19-3c04-4062-94da-f2058d7ef64a" containerID="b063329c753357a7ed3b9d6bec1638bc687c9277b9fe6b16859d4133fd1fc6a0" exitCode=0 Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.403634 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dtrz" event={"ID":"41f0db19-3c04-4062-94da-f2058d7ef64a","Type":"ContainerDied","Data":"b063329c753357a7ed3b9d6bec1638bc687c9277b9fe6b16859d4133fd1fc6a0"} Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.403668 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dtrz" event={"ID":"41f0db19-3c04-4062-94da-f2058d7ef64a","Type":"ContainerStarted","Data":"f33131e79e384fe2afe7360729e82c466d2dd7daf96c2ed6415e011ae52ad36a"} Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.409167 5010 generic.go:334] "Generic (PLEG): container finished" podID="1b4caad6-6b6c-452e-9be8-97e7115dbd72" containerID="f947c5d43a1cd178ea6882c8a748cf0e0703d0960f92472c74bb48b670787162" exitCode=0 Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.409234 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz7lx" event={"ID":"1b4caad6-6b6c-452e-9be8-97e7115dbd72","Type":"ContainerDied","Data":"f947c5d43a1cd178ea6882c8a748cf0e0703d0960f92472c74bb48b670787162"} Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.411614 5010 generic.go:334] "Generic (PLEG): container finished" podID="0a04fc61-013a-4515-92ca-e620b3d376d5" containerID="66e007c709fe7f7d9122d566e528247b7a5744b4d9c113cda7640fdb7f2392b8" exitCode=0 Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.411677 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96wzf" event={"ID":"0a04fc61-013a-4515-92ca-e620b3d376d5","Type":"ContainerDied","Data":"66e007c709fe7f7d9122d566e528247b7a5744b4d9c113cda7640fdb7f2392b8"} 
Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.414408 5010 generic.go:334] "Generic (PLEG): container finished" podID="499eebdd-1202-4427-bf19-7ff14c5f8507" containerID="266977f1c8826bf4506937bae4a2203a1b45ad313184b03a6022c3e9a2e18bec" exitCode=0 Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.414450 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwfjv" event={"ID":"499eebdd-1202-4427-bf19-7ff14c5f8507","Type":"ContainerDied","Data":"266977f1c8826bf4506937bae4a2203a1b45ad313184b03a6022c3e9a2e18bec"} Feb 03 10:08:37 crc kubenswrapper[5010]: I0203 10:08:37.414473 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwfjv" event={"ID":"499eebdd-1202-4427-bf19-7ff14c5f8507","Type":"ContainerStarted","Data":"9930daa4cb3b17269e2f4ce3847ee42981d1f4d57104af430b72251a6b0c459e"} Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.422094 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz7lx" event={"ID":"1b4caad6-6b6c-452e-9be8-97e7115dbd72","Type":"ContainerStarted","Data":"1bcfe5244cc922aa84a6a40e4680d517665f0a49f6f2b53318e7bc167e38eb2c"} Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.424072 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96wzf" event={"ID":"0a04fc61-013a-4515-92ca-e620b3d376d5","Type":"ContainerStarted","Data":"d68bd3a14f1325b87821010ebd48ce066009ad4fb502b7564ded43783c7668c5"} Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.426043 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwfjv" event={"ID":"499eebdd-1202-4427-bf19-7ff14c5f8507","Type":"ContainerStarted","Data":"c034b90c14f164a1c9b318b6bbc9cdbc987ea84f86b5e9e8ddfd80264db9be8a"} Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.427791 5010 generic.go:334] "Generic (PLEG): container finished" podID="41f0db19-3c04-4062-94da-f2058d7ef64a" containerID="c4b36012a304b17c9b9fadc9e622391ff6944a242cfef1aba9de2a55aeb56508" exitCode=0 Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.427873 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dtrz" event={"ID":"41f0db19-3c04-4062-94da-f2058d7ef64a","Type":"ContainerDied","Data":"c4b36012a304b17c9b9fadc9e622391ff6944a242cfef1aba9de2a55aeb56508"} Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.441005 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gz7lx" podStartSLOduration=2.92142193 podStartE2EDuration="5.440981411s" podCreationTimestamp="2026-02-03 10:08:33 +0000 UTC" firstStartedPulling="2026-02-03 10:08:35.389669803 +0000 UTC m=+385.545645932" lastFinishedPulling="2026-02-03 10:08:37.909229284 +0000 UTC m=+388.065205413" observedRunningTime="2026-02-03 10:08:38.439177263 +0000 UTC m=+388.595153392" watchObservedRunningTime="2026-02-03 10:08:38.440981411 +0000 UTC m=+388.596957540" Feb 03 10:08:38 crc kubenswrapper[5010]: I0203 10:08:38.497978 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-96wzf" podStartSLOduration=3.071213866 podStartE2EDuration="5.497960591s" podCreationTimestamp="2026-02-03 10:08:33 +0000 UTC" firstStartedPulling="2026-02-03 10:08:35.39218786 +0000 UTC m=+385.548163989" lastFinishedPulling="2026-02-03 10:08:37.818934585 +0000 UTC m=+387.974910714" 
observedRunningTime="2026-02-03 10:08:38.497811447 +0000 UTC m=+388.653787576" watchObservedRunningTime="2026-02-03 10:08:38.497960591 +0000 UTC m=+388.653936710" Feb 03 10:08:39 crc kubenswrapper[5010]: I0203 10:08:39.434826 5010 generic.go:334] "Generic (PLEG): container finished" podID="499eebdd-1202-4427-bf19-7ff14c5f8507" containerID="c034b90c14f164a1c9b318b6bbc9cdbc987ea84f86b5e9e8ddfd80264db9be8a" exitCode=0 Feb 03 10:08:39 crc kubenswrapper[5010]: I0203 10:08:39.434891 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwfjv" event={"ID":"499eebdd-1202-4427-bf19-7ff14c5f8507","Type":"ContainerDied","Data":"c034b90c14f164a1c9b318b6bbc9cdbc987ea84f86b5e9e8ddfd80264db9be8a"} Feb 03 10:08:39 crc kubenswrapper[5010]: I0203 10:08:39.438046 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dtrz" event={"ID":"41f0db19-3c04-4062-94da-f2058d7ef64a","Type":"ContainerStarted","Data":"74d0cf58551154d549c0dbe2e4f90b363b89d18105a1678c5ba367f1463377c5"} Feb 03 10:08:39 crc kubenswrapper[5010]: I0203 10:08:39.475535 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7dtrz" podStartSLOduration=2.024677904 podStartE2EDuration="3.475517322s" podCreationTimestamp="2026-02-03 10:08:36 +0000 UTC" firstStartedPulling="2026-02-03 10:08:37.405632548 +0000 UTC m=+387.561608677" lastFinishedPulling="2026-02-03 10:08:38.856471966 +0000 UTC m=+389.012448095" observedRunningTime="2026-02-03 10:08:39.473459427 +0000 UTC m=+389.629435566" watchObservedRunningTime="2026-02-03 10:08:39.475517322 +0000 UTC m=+389.631493461" Feb 03 10:08:40 crc kubenswrapper[5010]: I0203 10:08:40.446109 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xwfjv" event={"ID":"499eebdd-1202-4427-bf19-7ff14c5f8507","Type":"ContainerStarted","Data":"7a0f898c466476b945015975c9dbd85cf2a00daec2e0f2e319af85c44444d2b7"} Feb 03 10:08:40 crc kubenswrapper[5010]: I0203 10:08:40.481107 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xwfjv" podStartSLOduration=1.957640925 podStartE2EDuration="4.48109092s" podCreationTimestamp="2026-02-03 10:08:36 +0000 UTC" firstStartedPulling="2026-02-03 10:08:37.415705227 +0000 UTC m=+387.571681356" lastFinishedPulling="2026-02-03 10:08:39.939155222 +0000 UTC m=+390.095131351" observedRunningTime="2026-02-03 10:08:40.476064906 +0000 UTC m=+390.632041035" watchObservedRunningTime="2026-02-03 10:08:40.48109092 +0000 UTC m=+390.637067049" Feb 03 10:08:43 crc kubenswrapper[5010]: I0203 10:08:43.943100 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:43 crc kubenswrapper[5010]: I0203 10:08:43.943532 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:43 crc kubenswrapper[5010]: I0203 10:08:43.988277 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:44 crc kubenswrapper[5010]: I0203 10:08:44.138314 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:44 crc kubenswrapper[5010]: I0203 10:08:44.138353 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:44 crc kubenswrapper[5010]: I0203 10:08:44.181656 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:44 crc kubenswrapper[5010]: I0203 10:08:44.499078 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-96wzf" Feb 03 10:08:44 crc kubenswrapper[5010]: I0203 10:08:44.499539 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gz7lx" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.342190 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.342591 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.389306 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.389852 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.389990 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.513741 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7dtrz" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.569068 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.570088 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:46 crc kubenswrapper[5010]: I0203 10:08:46.611056 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:47 crc kubenswrapper[5010]: I0203 10:08:47.520373 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xwfjv" Feb 03 10:08:52 crc kubenswrapper[5010]: I0203 10:08:52.554768 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fgqs4" Feb 03 10:08:52 crc kubenswrapper[5010]: I0203 10:08:52.612318 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"] Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.390280 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.390765 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.390805 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.391389 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.391442 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4" gracePeriod=600 Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.668689 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4" exitCode=0 Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.669041 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4"} Feb 03 10:09:16 crc kubenswrapper[5010]: I0203 10:09:16.669081 5010 scope.go:117] "RemoveContainer" containerID="48b1a19c32be1c127c1cf92b658eac555af338b3f535cd6ac0efd00a3ce82deb" Feb 03 10:09:17 crc kubenswrapper[5010]: I0203 10:09:17.676627 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6"} Feb 03 10:09:17 crc kubenswrapper[5010]: I0203 10:09:17.689070 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" podUID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" containerName="registry" containerID="cri-o://4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27" gracePeriod=30 Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.081454 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.131820 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.131891 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.131922 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.131939 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.132107 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.132142 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.132162 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.132233 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf8k7\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7\") pod \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\" (UID: \"594e9304-c63f-4d73-bcad-5258c1ebdd6d\") " Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.133001 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.133334 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.137625 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.137847 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.138355 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7" (OuterVolumeSpecName: "kube-api-access-mf8k7") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "kube-api-access-mf8k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.141271 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.146361 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.150373 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "594e9304-c63f-4d73-bcad-5258c1ebdd6d" (UID: "594e9304-c63f-4d73-bcad-5258c1ebdd6d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233605 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf8k7\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-kube-api-access-mf8k7\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233651 5010 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233661 5010 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233670 5010 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/594e9304-c63f-4d73-bcad-5258c1ebdd6d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233681 5010 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/594e9304-c63f-4d73-bcad-5258c1ebdd6d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233689 5010 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/594e9304-c63f-4d73-bcad-5258c1ebdd6d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.233697 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/594e9304-c63f-4d73-bcad-5258c1ebdd6d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.684067 5010 generic.go:334] "Generic (PLEG): container finished" podID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" containerID="4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27" exitCode=0 Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.684553 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.684882 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" event={"ID":"594e9304-c63f-4d73-bcad-5258c1ebdd6d","Type":"ContainerDied","Data":"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27"} Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.684921 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-x857s" event={"ID":"594e9304-c63f-4d73-bcad-5258c1ebdd6d","Type":"ContainerDied","Data":"4d0c21608e47f2a5fbe71a063022d5430ee94df368929ef6f0cd30bef83d5cd9"} Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.684940 5010 scope.go:117] "RemoveContainer" containerID="4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.707813 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"] Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.709926 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-x857s"] Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.714339 5010 scope.go:117] "RemoveContainer" containerID="4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27" Feb 03 10:09:18 crc kubenswrapper[5010]: E0203 10:09:18.715025 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27\": container with ID starting with 4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27 not found: ID does not exist" containerID="4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27" Feb 03 10:09:18 crc kubenswrapper[5010]: I0203 10:09:18.715068 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27"} err="failed to get container status \"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27\": rpc error: code = NotFound desc = could not find container \"4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27\": container with ID starting with 4a5b96463e1e0cbe2a97d722ca585d361990169959ef941c87646fcf8f000d27 not found: ID does not exist" Feb 03 10:09:20 crc kubenswrapper[5010]: I0203 10:09:20.513066 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" path="/var/lib/kubelet/pods/594e9304-c63f-4d73-bcad-5258c1ebdd6d/volumes" Feb 03 10:11:10 crc kubenswrapper[5010]: I0203 10:11:10.717156 5010 scope.go:117] "RemoveContainer" containerID="9193e654b0aae87a0f6cb66b87865bff8d5a0d8845927c6e2ff446174e9141b4" Feb 03 10:11:16 crc kubenswrapper[5010]: I0203 10:11:16.389963 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:11:16 crc kubenswrapper[5010]: I0203 10:11:16.390500 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:11:46 crc kubenswrapper[5010]: I0203 10:11:46.390098 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:11:46 crc kubenswrapper[5010]: I0203 10:11:46.390824 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.390422 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.391059 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.391147 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.391985 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.392068 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6" gracePeriod=600 Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.673128 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6" exitCode=0 Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.673235 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6"} Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.673574 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d"} Feb 03 10:12:16 crc kubenswrapper[5010]: I0203 10:12:16.673603 5010 scope.go:117] "RemoveContainer" containerID="f50e55cc732f578ead4018fcd8ab51937afcd54061bf1c5885e82d08d42bd4d4" Feb 03 10:13:10 crc kubenswrapper[5010]: I0203 10:13:10.772992 5010 scope.go:117] "RemoveContainer" containerID="aafef9981fa7d11562eb0bd58e7300535437ad38c9714ffedb6d952272ad69e5" Feb 03 10:14:16 crc kubenswrapper[5010]: I0203 10:14:16.389632 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:14:16 crc kubenswrapper[5010]: I0203 10:14:16.390190 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:14:46 crc kubenswrapper[5010]: I0203 10:14:46.390303 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:14:46 crc kubenswrapper[5010]: I0203 10:14:46.390804 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:14:54 crc kubenswrapper[5010]: I0203 10:14:54.411968 5010 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.176261 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz"] Feb 03 10:15:00 crc kubenswrapper[5010]: E0203 10:15:00.176747 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" containerName="registry" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.176759 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" containerName="registry" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.176897 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="594e9304-c63f-4d73-bcad-5258c1ebdd6d" containerName="registry" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.177431 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.180009 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.181771 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.183435 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz"] Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.328253 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.328326 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p24nv\" (UniqueName: \"kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.328442 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.429262 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p24nv\" (UniqueName: \"kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.429354 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.429377 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.430243 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume\") pod 
\"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.439881 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.449872 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p24nv\" (UniqueName: \"kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv\") pod \"collect-profiles-29501895-dwjmz\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.495025 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.692539 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz"] Feb 03 10:15:00 crc kubenswrapper[5010]: I0203 10:15:00.784849 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" event={"ID":"0eae17d2-2362-4e78-908b-42fcb386ec60","Type":"ContainerStarted","Data":"8d324f3579b6cb0c90918bb39a82d082a4c75658003709ed685fa0043f912d2e"} Feb 03 10:15:01 crc kubenswrapper[5010]: I0203 10:15:01.793544 5010 generic.go:334] "Generic (PLEG): container finished" podID="0eae17d2-2362-4e78-908b-42fcb386ec60" containerID="73db75a439822b6dd55d522e4da89fbd20aa66ab67d412f72f9dfe07016f6245" exitCode=0 Feb 03 10:15:01 crc kubenswrapper[5010]: I0203 10:15:01.793612 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" event={"ID":"0eae17d2-2362-4e78-908b-42fcb386ec60","Type":"ContainerDied","Data":"73db75a439822b6dd55d522e4da89fbd20aa66ab67d412f72f9dfe07016f6245"} Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.026540 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.180550 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume\") pod \"0eae17d2-2362-4e78-908b-42fcb386ec60\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.180752 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p24nv\" (UniqueName: \"kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv\") pod \"0eae17d2-2362-4e78-908b-42fcb386ec60\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.180889 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume\") pod \"0eae17d2-2362-4e78-908b-42fcb386ec60\" (UID: \"0eae17d2-2362-4e78-908b-42fcb386ec60\") " Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.182543 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume" (OuterVolumeSpecName: "config-volume") pod "0eae17d2-2362-4e78-908b-42fcb386ec60" (UID: "0eae17d2-2362-4e78-908b-42fcb386ec60"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.188596 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0eae17d2-2362-4e78-908b-42fcb386ec60" (UID: "0eae17d2-2362-4e78-908b-42fcb386ec60"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.188964 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv" (OuterVolumeSpecName: "kube-api-access-p24nv") pod "0eae17d2-2362-4e78-908b-42fcb386ec60" (UID: "0eae17d2-2362-4e78-908b-42fcb386ec60"). InnerVolumeSpecName "kube-api-access-p24nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.283629 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0eae17d2-2362-4e78-908b-42fcb386ec60-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.283690 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0eae17d2-2362-4e78-908b-42fcb386ec60-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.283702 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p24nv\" (UniqueName: \"kubernetes.io/projected/0eae17d2-2362-4e78-908b-42fcb386ec60-kube-api-access-p24nv\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.806977 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" event={"ID":"0eae17d2-2362-4e78-908b-42fcb386ec60","Type":"ContainerDied","Data":"8d324f3579b6cb0c90918bb39a82d082a4c75658003709ed685fa0043f912d2e"} Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.807030 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d324f3579b6cb0c90918bb39a82d082a4c75658003709ed685fa0043f912d2e" Feb 03 10:15:03 crc kubenswrapper[5010]: I0203 10:15:03.807124 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.555256 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:11 crc kubenswrapper[5010]: E0203 10:15:11.556837 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eae17d2-2362-4e78-908b-42fcb386ec60" containerName="collect-profiles" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.556917 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eae17d2-2362-4e78-908b-42fcb386ec60" containerName="collect-profiles" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.557064 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eae17d2-2362-4e78-908b-42fcb386ec60" containerName="collect-profiles" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.557904 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.566740 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.683828 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.684248 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbtwv\" (UniqueName: \"kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.684353 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.786175 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbtwv\" (UniqueName: \"kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.786508 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.786659 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.787204 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.787398 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.807449 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sbtwv\" (UniqueName: \"kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv\") pod \"certified-operators-krrwt\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:11 crc kubenswrapper[5010]: I0203 10:15:11.874230 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:12 crc kubenswrapper[5010]: I0203 10:15:12.115617 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:12 crc kubenswrapper[5010]: I0203 10:15:12.849432 5010 generic.go:334] "Generic (PLEG): container finished" podID="237c1de5-296b-44bc-91d7-c22e7c476939" containerID="d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c" exitCode=0 Feb 03 10:15:12 crc kubenswrapper[5010]: I0203 10:15:12.849484 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerDied","Data":"d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c"} Feb 03 10:15:12 crc kubenswrapper[5010]: I0203 10:15:12.849733 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerStarted","Data":"2b643800be8a8e15452559b1220b173cfe0c49c5dc8916864c4c014b46512dcd"} Feb 03 10:15:12 crc kubenswrapper[5010]: I0203 10:15:12.851090 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:15:13 crc kubenswrapper[5010]: I0203 10:15:13.860503 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerStarted","Data":"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7"} Feb 03 10:15:14 crc kubenswrapper[5010]: I0203 10:15:14.867409 5010 generic.go:334] "Generic (PLEG): container finished" podID="237c1de5-296b-44bc-91d7-c22e7c476939" containerID="76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7" exitCode=0 Feb 03 10:15:14 crc kubenswrapper[5010]: I0203 10:15:14.867732 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerDied","Data":"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7"} Feb 03 10:15:15 crc kubenswrapper[5010]: I0203 10:15:15.875038 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerStarted","Data":"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1"} Feb 03 10:15:15 crc kubenswrapper[5010]: I0203 10:15:15.896950 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-krrwt" podStartSLOduration=2.490613839 podStartE2EDuration="4.896930696s" podCreationTimestamp="2026-02-03 10:15:11 +0000 UTC" firstStartedPulling="2026-02-03 10:15:12.850665747 +0000 UTC m=+783.006641876" lastFinishedPulling="2026-02-03 10:15:15.256982604 +0000 UTC m=+785.412958733" observedRunningTime="2026-02-03 10:15:15.894699171 +0000 UTC m=+786.050675300" watchObservedRunningTime="2026-02-03 
10:15:15.896930696 +0000 UTC m=+786.052906825" Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.390621 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.391195 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.391286 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.392203 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.392354 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d" gracePeriod=600 Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.885093 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d" exitCode=0 Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.885195 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d"} Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.885309 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0"} Feb 03 10:15:16 crc kubenswrapper[5010]: I0203 10:15:16.885335 5010 scope.go:117] "RemoveContainer" containerID="7590c7f71cb1479ef753f84e11bac9c523014434d96f673572f6202b5d5157c6" Feb 03 10:15:18 crc kubenswrapper[5010]: I0203 10:15:18.940715 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"] Feb 03 10:15:18 crc kubenswrapper[5010]: I0203 10:15:18.944051 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:18 crc kubenswrapper[5010]: I0203 10:15:18.955903 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"] Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.090601 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76lr9\" (UniqueName: \"kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.091003 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.091035 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.193117 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76lr9\" (UniqueName: \"kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.193334 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.193408 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.194040 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.194116 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.217004 5010 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-76lr9\" (UniqueName: \"kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9\") pod \"redhat-marketplace-h5jw9\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.267559 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.496691 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"] Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.911904 5010 generic.go:334] "Generic (PLEG): container finished" podID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerID="f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386" exitCode=0 Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.911983 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerDied","Data":"f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386"} Feb 03 10:15:19 crc kubenswrapper[5010]: I0203 10:15:19.912012 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerStarted","Data":"886a6e84902e3d168c9afbd1fdc0db0df45cb54090864e42049678385ba60527"} Feb 03 10:15:21 crc kubenswrapper[5010]: I0203 10:15:21.874370 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:21 crc kubenswrapper[5010]: I0203 10:15:21.874460 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:21 crc kubenswrapper[5010]: I0203 10:15:21.922421 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:21 crc kubenswrapper[5010]: I0203 10:15:21.968509 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:22 crc kubenswrapper[5010]: I0203 10:15:22.933820 5010 generic.go:334] "Generic (PLEG): container finished" podID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerID="f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559" exitCode=0 Feb 03 10:15:22 crc kubenswrapper[5010]: I0203 10:15:22.933940 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerDied","Data":"f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559"} Feb 03 10:15:23 crc kubenswrapper[5010]: I0203 10:15:23.127364 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:23 crc kubenswrapper[5010]: I0203 10:15:23.941580 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerStarted","Data":"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63"} Feb 03 10:15:23 crc kubenswrapper[5010]: I0203 10:15:23.941670 5010 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-krrwt" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="registry-server" containerID="cri-o://b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1" gracePeriod=2 Feb 03 10:15:23 crc kubenswrapper[5010]: I0203 10:15:23.964667 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h5jw9" podStartSLOduration=2.488907111 podStartE2EDuration="5.964646735s" podCreationTimestamp="2026-02-03 10:15:18 +0000 UTC" firstStartedPulling="2026-02-03 10:15:19.913403379 +0000 UTC m=+790.069379508" lastFinishedPulling="2026-02-03 10:15:23.389143003 +0000 UTC m=+793.545119132" observedRunningTime="2026-02-03 10:15:23.964340407 +0000 UTC m=+794.120316546" watchObservedRunningTime="2026-02-03 10:15:23.964646735 +0000 UTC m=+794.120622864" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.267802 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.459245 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbtwv\" (UniqueName: \"kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv\") pod \"237c1de5-296b-44bc-91d7-c22e7c476939\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.459304 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content\") pod \"237c1de5-296b-44bc-91d7-c22e7c476939\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.459346 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities\") pod \"237c1de5-296b-44bc-91d7-c22e7c476939\" (UID: \"237c1de5-296b-44bc-91d7-c22e7c476939\") " Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.460319 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities" (OuterVolumeSpecName: "utilities") pod "237c1de5-296b-44bc-91d7-c22e7c476939" (UID: "237c1de5-296b-44bc-91d7-c22e7c476939"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.465649 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv" (OuterVolumeSpecName: "kube-api-access-sbtwv") pod "237c1de5-296b-44bc-91d7-c22e7c476939" (UID: "237c1de5-296b-44bc-91d7-c22e7c476939"). InnerVolumeSpecName "kube-api-access-sbtwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.560260 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbtwv\" (UniqueName: \"kubernetes.io/projected/237c1de5-296b-44bc-91d7-c22e7c476939-kube-api-access-sbtwv\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.560293 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.574456 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "237c1de5-296b-44bc-91d7-c22e7c476939" (UID: "237c1de5-296b-44bc-91d7-c22e7c476939"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.662564 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/237c1de5-296b-44bc-91d7-c22e7c476939-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.950091 5010 generic.go:334] "Generic (PLEG): container finished" podID="237c1de5-296b-44bc-91d7-c22e7c476939" containerID="b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1" exitCode=0 Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.950128 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-krrwt" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.950149 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerDied","Data":"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1"} Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.950248 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-krrwt" event={"ID":"237c1de5-296b-44bc-91d7-c22e7c476939","Type":"ContainerDied","Data":"2b643800be8a8e15452559b1220b173cfe0c49c5dc8916864c4c014b46512dcd"} Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.950281 5010 scope.go:117] "RemoveContainer" containerID="b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.967542 5010 scope.go:117] "RemoveContainer" containerID="76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.988614 5010 scope.go:117] "RemoveContainer" containerID="d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c" Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.988763 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:24 crc kubenswrapper[5010]: I0203 10:15:24.990269 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-krrwt"] Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.020565 5010 scope.go:117] "RemoveContainer" containerID="b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1" Feb 03 10:15:25 crc kubenswrapper[5010]: E0203 10:15:25.021118 5010 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1\": container with ID starting with b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1 not found: ID does not exist" containerID="b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1" Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.021238 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1"} err="failed to get container status \"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1\": rpc error: code = NotFound desc = could not find container \"b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1\": container with ID starting with b83077626279b5dff6ec0ae227bd81ef409977581297527a0bdc8ddb9fc2afb1 not found: ID does not exist" Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.021333 5010 scope.go:117] "RemoveContainer" containerID="76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7" Feb 03 10:15:25 crc kubenswrapper[5010]: E0203 10:15:25.022692 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7\": container with ID starting with 76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7 not found: ID does not exist" containerID="76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7" Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.022719 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7"} err="failed to get container status \"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7\": rpc error: code = NotFound desc = could not find container \"76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7\": container with ID starting with 76e314709270ca219606fa9e7365adf198797d39f37685c2a5767d9f7b45fca7 not found: ID does not exist" Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.022736 5010 scope.go:117] "RemoveContainer" containerID="d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c" Feb 03 10:15:25 crc kubenswrapper[5010]: E0203 10:15:25.023152 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c\": container with ID starting with d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c not found: ID does not exist" containerID="d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c" Feb 03 10:15:25 crc kubenswrapper[5010]: I0203 10:15:25.023201 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c"} err="failed to get container status \"d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c\": rpc error: code = NotFound desc = could not find container \"d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c\": container with ID starting with d3ab4e92e5996b7fa02f99acb8c39257d71c3a1a272930f96f8b06ae29dee06c not found: ID does not exist" Feb 03 10:15:26 crc kubenswrapper[5010]: I0203 10:15:26.508895 5010 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" path="/var/lib/kubelet/pods/237c1de5-296b-44bc-91d7-c22e7c476939/volumes" Feb 03 10:15:29 crc kubenswrapper[5010]: I0203 10:15:29.268458 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:29 crc kubenswrapper[5010]: I0203 10:15:29.268817 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:29 crc kubenswrapper[5010]: I0203 10:15:29.308300 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:30 crc kubenswrapper[5010]: I0203 10:15:30.015899 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:30 crc kubenswrapper[5010]: I0203 10:15:30.056306 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"] Feb 03 10:15:31 crc kubenswrapper[5010]: I0203 10:15:31.986693 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h5jw9" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="registry-server" containerID="cri-o://1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63" gracePeriod=2 Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.372787 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.560728 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content\") pod \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.561374 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76lr9\" (UniqueName: \"kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9\") pod \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.561494 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities\") pod \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\" (UID: \"2d5ec45e-19ce-4629-a3e8-66e3053a1649\") " Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.562618 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities" (OuterVolumeSpecName: "utilities") pod "2d5ec45e-19ce-4629-a3e8-66e3053a1649" (UID: "2d5ec45e-19ce-4629-a3e8-66e3053a1649"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.569755 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9" (OuterVolumeSpecName: "kube-api-access-76lr9") pod "2d5ec45e-19ce-4629-a3e8-66e3053a1649" (UID: "2d5ec45e-19ce-4629-a3e8-66e3053a1649"). InnerVolumeSpecName "kube-api-access-76lr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.589503 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d5ec45e-19ce-4629-a3e8-66e3053a1649" (UID: "2d5ec45e-19ce-4629-a3e8-66e3053a1649"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.663196 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76lr9\" (UniqueName: \"kubernetes.io/projected/2d5ec45e-19ce-4629-a3e8-66e3053a1649-kube-api-access-76lr9\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.663361 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.663392 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d5ec45e-19ce-4629-a3e8-66e3053a1649-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.993611 5010 generic.go:334] "Generic (PLEG): container finished" podID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerID="1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63" exitCode=0 Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.993672 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5jw9" Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.993676 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerDied","Data":"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63"} Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.993845 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5jw9" event={"ID":"2d5ec45e-19ce-4629-a3e8-66e3053a1649","Type":"ContainerDied","Data":"886a6e84902e3d168c9afbd1fdc0db0df45cb54090864e42049678385ba60527"} Feb 03 10:15:32 crc kubenswrapper[5010]: I0203 10:15:32.993881 5010 scope.go:117] "RemoveContainer" containerID="1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.011348 5010 scope.go:117] "RemoveContainer" containerID="f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.037663 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"] Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.039192 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5jw9"] Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.044851 5010 scope.go:117] "RemoveContainer" containerID="f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.067695 5010 scope.go:117] "RemoveContainer" containerID="1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63" Feb 03 10:15:33 crc kubenswrapper[5010]: E0203 10:15:33.068299 5010 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63\": container with ID starting with 1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63 not found: ID does not exist" containerID="1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.068345 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63"} err="failed to get container status \"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63\": rpc error: code = NotFound desc = could not find container \"1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63\": container with ID starting with 1c32725b0c68717a4502e6d8f5e370a370dd2132c38d4508966518861419ef63 not found: ID does not exist" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.068366 5010 scope.go:117] "RemoveContainer" containerID="f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559" Feb 03 10:15:33 crc kubenswrapper[5010]: E0203 10:15:33.068715 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559\": container with ID starting with f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559 not found: ID does not exist" containerID="f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.068748 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559"} err="failed to get container status \"f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559\": rpc error: code = NotFound desc = could not find container \"f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559\": container with ID starting with f485fbfbe73afe60190f2ee61a871aa2a88727244c98bffb3c96901dddc71559 not found: ID does not exist" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.068764 5010 scope.go:117] "RemoveContainer" containerID="f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386" Feb 03 10:15:33 crc kubenswrapper[5010]: E0203 10:15:33.069064 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386\": container with ID starting with f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386 not found: ID does not exist" containerID="f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386" Feb 03 10:15:33 crc kubenswrapper[5010]: I0203 10:15:33.069090 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386"} err="failed to get container status \"f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386\": rpc error: code = NotFound desc = could not find container \"f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386\": container with ID starting with f476da553dd3185056d6cb30158a1a71f539fd0830528640dea4259b97612386 not found: ID does not exist" Feb 03 10:15:34 crc kubenswrapper[5010]: I0203 10:15:34.510041 5010 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" path="/var/lib/kubelet/pods/2d5ec45e-19ce-4629-a3e8-66e3053a1649/volumes" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.972567 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd"] Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973471 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="extract-content" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973490 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="extract-content" Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973504 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="registry-server" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973511 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="registry-server" Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973521 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="extract-utilities" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973528 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="extract-utilities" Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973542 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="extract-content" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973550 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="extract-content" Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973561 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="registry-server" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973568 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="registry-server" Feb 03 10:16:47 crc kubenswrapper[5010]: E0203 10:16:47.973577 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="extract-utilities" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973587 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="extract-utilities" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973701 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5ec45e-19ce-4629-a3e8-66e3053a1649" containerName="registry-server" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.973718 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="237c1de5-296b-44bc-91d7-c22e7c476939" containerName="registry-server" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.974243 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.975994 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.976200 5010 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jztx5" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.976907 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.978521 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-wtwpn"] Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.979155 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-wtwpn" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.981303 5010 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-bmdtr" Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.989145 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd"] Feb 03 10:16:47 crc kubenswrapper[5010]: I0203 10:16:47.998130 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-wtwpn"] Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.002264 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bfc2c"] Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.003765 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.006165 5010 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2vtp2" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.016839 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bfc2c"] Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.031548 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvvcm\" (UniqueName: \"kubernetes.io/projected/26bf0193-c1b8-4018-a7e4-4429a4292dfb-kube-api-access-zvvcm\") pod \"cert-manager-webhook-687f57d79b-bfc2c\" (UID: \"26bf0193-c1b8-4018-a7e4-4429a4292dfb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.031605 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcgtb\" (UniqueName: \"kubernetes.io/projected/7746ae6f-d9a0-4bba-a7bc-4920ed478ff4-kube-api-access-lcgtb\") pod \"cert-manager-858654f9db-wtwpn\" (UID: \"7746ae6f-d9a0-4bba-a7bc-4920ed478ff4\") " pod="cert-manager/cert-manager-858654f9db-wtwpn" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.031774 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcw44\" (UniqueName: \"kubernetes.io/projected/b9d02d93-3df5-4e4a-99b3-07329087dc2c-kube-api-access-wcw44\") pod \"cert-manager-cainjector-cf98fcc89-b5ngd\" (UID: \"b9d02d93-3df5-4e4a-99b3-07329087dc2c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" Feb 03 10:16:48 crc 
kubenswrapper[5010]: I0203 10:16:48.133486 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvvcm\" (UniqueName: \"kubernetes.io/projected/26bf0193-c1b8-4018-a7e4-4429a4292dfb-kube-api-access-zvvcm\") pod \"cert-manager-webhook-687f57d79b-bfc2c\" (UID: \"26bf0193-c1b8-4018-a7e4-4429a4292dfb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.133558 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcgtb\" (UniqueName: \"kubernetes.io/projected/7746ae6f-d9a0-4bba-a7bc-4920ed478ff4-kube-api-access-lcgtb\") pod \"cert-manager-858654f9db-wtwpn\" (UID: \"7746ae6f-d9a0-4bba-a7bc-4920ed478ff4\") " pod="cert-manager/cert-manager-858654f9db-wtwpn" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.133624 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcw44\" (UniqueName: \"kubernetes.io/projected/b9d02d93-3df5-4e4a-99b3-07329087dc2c-kube-api-access-wcw44\") pod \"cert-manager-cainjector-cf98fcc89-b5ngd\" (UID: \"b9d02d93-3df5-4e4a-99b3-07329087dc2c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.154844 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcgtb\" (UniqueName: \"kubernetes.io/projected/7746ae6f-d9a0-4bba-a7bc-4920ed478ff4-kube-api-access-lcgtb\") pod \"cert-manager-858654f9db-wtwpn\" (UID: \"7746ae6f-d9a0-4bba-a7bc-4920ed478ff4\") " pod="cert-manager/cert-manager-858654f9db-wtwpn" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.154977 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvvcm\" (UniqueName: \"kubernetes.io/projected/26bf0193-c1b8-4018-a7e4-4429a4292dfb-kube-api-access-zvvcm\") pod \"cert-manager-webhook-687f57d79b-bfc2c\" (UID: \"26bf0193-c1b8-4018-a7e4-4429a4292dfb\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.157774 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcw44\" (UniqueName: \"kubernetes.io/projected/b9d02d93-3df5-4e4a-99b3-07329087dc2c-kube-api-access-wcw44\") pod \"cert-manager-cainjector-cf98fcc89-b5ngd\" (UID: \"b9d02d93-3df5-4e4a-99b3-07329087dc2c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.299422 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.313560 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-wtwpn" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.324581 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.510621 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd"] Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.561946 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-wtwpn"] Feb 03 10:16:48 crc kubenswrapper[5010]: W0203 10:16:48.564101 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7746ae6f_d9a0_4bba_a7bc_4920ed478ff4.slice/crio-1a6b2fff6c2c877f9faaba2e7766850766fc6b249d477de0cfa169d4e843e012 WatchSource:0}: Error finding container 1a6b2fff6c2c877f9faaba2e7766850766fc6b249d477de0cfa169d4e843e012: Status 404 returned error can't find the container with id 1a6b2fff6c2c877f9faaba2e7766850766fc6b249d477de0cfa169d4e843e012 Feb 03 10:16:48 crc kubenswrapper[5010]: I0203 10:16:48.784198 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bfc2c"] Feb 03 10:16:48 crc kubenswrapper[5010]: W0203 10:16:48.786740 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26bf0193_c1b8_4018_a7e4_4429a4292dfb.slice/crio-a29dd4f000f1c35a47352aaab15731442e114bbaa34a4c67674d2948fb1a296a WatchSource:0}: Error finding container a29dd4f000f1c35a47352aaab15731442e114bbaa34a4c67674d2948fb1a296a: Status 404 returned error can't find the container with id a29dd4f000f1c35a47352aaab15731442e114bbaa34a4c67674d2948fb1a296a Feb 03 10:16:49 crc kubenswrapper[5010]: I0203 10:16:49.408323 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-wtwpn" event={"ID":"7746ae6f-d9a0-4bba-a7bc-4920ed478ff4","Type":"ContainerStarted","Data":"1a6b2fff6c2c877f9faaba2e7766850766fc6b249d477de0cfa169d4e843e012"} Feb 03 10:16:49 crc kubenswrapper[5010]: I0203 10:16:49.410880 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" event={"ID":"26bf0193-c1b8-4018-a7e4-4429a4292dfb","Type":"ContainerStarted","Data":"a29dd4f000f1c35a47352aaab15731442e114bbaa34a4c67674d2948fb1a296a"} Feb 03 10:16:49 crc kubenswrapper[5010]: I0203 10:16:49.412710 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" event={"ID":"b9d02d93-3df5-4e4a-99b3-07329087dc2c","Type":"ContainerStarted","Data":"2cddbddc0228cef92a4671f6daa25b6d3b74e64583cf8aa6c4e62bacce552dbc"} Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.437807 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" event={"ID":"b9d02d93-3df5-4e4a-99b3-07329087dc2c","Type":"ContainerStarted","Data":"436ff1c500f0d5f50c199f3323f28bb5ed29b2ccdcc4fdd70509225c7c1e56c3"} Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.439931 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-wtwpn" event={"ID":"7746ae6f-d9a0-4bba-a7bc-4920ed478ff4","Type":"ContainerStarted","Data":"0bf8d1d6cf91e2f16e9cad3a294971e83cd58c3cd0109b077649ab3f47ecd540"} Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.441275 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" 
event={"ID":"26bf0193-c1b8-4018-a7e4-4429a4292dfb","Type":"ContainerStarted","Data":"c71972428c6cfe55c1f0ecb7037993e0707efe5fe272aecb60ca9f4cecaee590"} Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.441410 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.458300 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-b5ngd" podStartSLOduration=2.722558656 podStartE2EDuration="6.458281197s" podCreationTimestamp="2026-02-03 10:16:47 +0000 UTC" firstStartedPulling="2026-02-03 10:16:48.518449594 +0000 UTC m=+878.674425723" lastFinishedPulling="2026-02-03 10:16:52.254172135 +0000 UTC m=+882.410148264" observedRunningTime="2026-02-03 10:16:53.451619142 +0000 UTC m=+883.607595311" watchObservedRunningTime="2026-02-03 10:16:53.458281197 +0000 UTC m=+883.614257336" Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.480036 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-wtwpn" podStartSLOduration=2.783576072 podStartE2EDuration="6.480018857s" podCreationTimestamp="2026-02-03 10:16:47 +0000 UTC" firstStartedPulling="2026-02-03 10:16:48.565969974 +0000 UTC m=+878.721946103" lastFinishedPulling="2026-02-03 10:16:52.262412749 +0000 UTC m=+882.418388888" observedRunningTime="2026-02-03 10:16:53.477607687 +0000 UTC m=+883.633583826" watchObservedRunningTime="2026-02-03 10:16:53.480018857 +0000 UTC m=+883.635994986" Feb 03 10:16:53 crc kubenswrapper[5010]: I0203 10:16:53.495516 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c" podStartSLOduration=3.022764832 podStartE2EDuration="6.495492971s" podCreationTimestamp="2026-02-03 10:16:47 +0000 UTC" firstStartedPulling="2026-02-03 10:16:48.78886788 +0000 UTC m=+878.944844009" lastFinishedPulling="2026-02-03 10:16:52.261596019 +0000 UTC m=+882.417572148" observedRunningTime="2026-02-03 10:16:53.492239501 +0000 UTC m=+883.648215630" watchObservedRunningTime="2026-02-03 10:16:53.495492971 +0000 UTC m=+883.651469120" Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.757160 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68p7p"] Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.757802 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-controller" containerID="cri-o://f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" gracePeriod=30 Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.757855 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="nbdb" containerID="cri-o://6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" gracePeriod=30 Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.757938 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-node" containerID="cri-o://76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" gracePeriod=30 Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 
10:16:56.757976 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" gracePeriod=30 Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.757964 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="sbdb" containerID="cri-o://1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" gracePeriod=30 Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.758018 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-acl-logging" containerID="cri-o://8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" gracePeriod=30 Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.758153 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="northd" containerID="cri-o://24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" gracePeriod=30 Feb 03 10:16:56 crc kubenswrapper[5010]: I0203 10:16:56.795407 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" containerID="cri-o://bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" gracePeriod=30 Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.905657 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.906095 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.912483 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.912530 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.914102 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.914130 5010 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="sbdb" Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.914183 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 03 10:16:56 crc kubenswrapper[5010]: E0203 10:16:56.914196 5010 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="nbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.099341 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/3.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.101715 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovn-acl-logging/0.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.102152 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovn-controller/0.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.102553 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155156 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dx6zw"] Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155393 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kubecfg-setup" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155409 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kubecfg-setup" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155419 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="nbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155426 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="nbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155435 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="sbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155442 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="sbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155451 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155457 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155465 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155471 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155788 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="northd" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155902 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="northd" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155918 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155925 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155940 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-acl-logging" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155946 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-acl-logging" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155963 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155971 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.155992 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.155998 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.156012 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156019 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.156036 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-node" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156042 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-node" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156836 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156881 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156895 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156908 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156919 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-acl-logging" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156932 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156939 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="sbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156946 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovn-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156958 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156966 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="northd" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156976 
5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="kube-rbac-proxy-node" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.156984 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="nbdb" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.157651 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.157676 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerName="ovnkube-controller" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.165544 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256672 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256749 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256780 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256804 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256844 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256845 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256885 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash" (OuterVolumeSpecName: "host-slash") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256889 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256863 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256853 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256846 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256926 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.256959 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257027 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257045 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257076 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xwzz\" (UniqueName: \"kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257091 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257120 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257142 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257159 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257180 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257200 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257262 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257282 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257300 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd\") pod \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\" (UID: \"afbb630a-0dee-4c9c-90ff-cb710b9da3f2\") " Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257482 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-bin\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257509 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-slash\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257527 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257561 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-netns\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257586 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-ovn\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257600 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmtnq\" (UniqueName: \"kubernetes.io/projected/44b9089e-c580-4353-9e4b-04a3a270e59f-kube-api-access-pmtnq\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257616 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-node-log\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257635 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-netd\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257659 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-env-overrides\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257676 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-config\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257714 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-script-lib\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257742 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-var-lib-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257757 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-systemd-units\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257775 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257795 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-kubelet\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257819 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257832 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44b9089e-c580-4353-9e4b-04a3a270e59f-ovn-node-metrics-cert\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257854 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-systemd\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257899 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-etc-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257043 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257069 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257087 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257934 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-log-socket\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257980 5010 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257995 5010 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258006 5010 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258016 5010 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-slash\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258027 5010 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258039 5010 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258050 5010 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258060 5010 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258071 5010 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257110 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket" (OuterVolumeSpecName: "log-socket") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.257134 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258603 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.258927 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.259202 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.259257 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.259282 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log" (OuterVolumeSpecName: "node-log") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.259301 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.264481 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.264842 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz" (OuterVolumeSpecName: "kube-api-access-2xwzz") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "kube-api-access-2xwzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.273031 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "afbb630a-0dee-4c9c-90ff-cb710b9da3f2" (UID: "afbb630a-0dee-4c9c-90ff-cb710b9da3f2"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359141 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-etc-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359206 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-log-socket\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359247 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-bin\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359270 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-slash\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359283 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-etc-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359291 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359343 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-slash\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359349 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-log-socket\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-netns\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359356 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-netns\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359421 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-ovn\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359443 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmtnq\" (UniqueName: \"kubernetes.io/projected/44b9089e-c580-4353-9e4b-04a3a270e59f-kube-api-access-pmtnq\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359464 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-node-log\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359486 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-netd\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359512 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-env-overrides\") pod \"ovnkube-node-dx6zw\" (UID: 
\"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359533 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-config\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359568 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-script-lib\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359612 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-systemd-units\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359633 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-var-lib-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359656 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359713 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-kubelet\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359739 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359761 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44b9089e-c580-4353-9e4b-04a3a270e59f-ovn-node-metrics-cert\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359785 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-systemd\") pod \"ovnkube-node-dx6zw\" (UID: 
\"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359835 5010 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-log-socket\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359849 5010 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359862 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xwzz\" (UniqueName: \"kubernetes.io/projected/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-kube-api-access-2xwzz\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359874 5010 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359886 5010 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359898 5010 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359911 5010 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359923 5010 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-node-log\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359935 5010 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359945 5010 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359956 5010 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afbb630a-0dee-4c9c-90ff-cb710b9da3f2-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359988 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-systemd\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359321 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360028 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-run-ovn\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.359349 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-bin\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-var-lib-openvswitch\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-systemd-units\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360416 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360428 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-node-log\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360454 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-kubelet\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360456 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-cni-netd\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.360560 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/44b9089e-c580-4353-9e4b-04a3a270e59f-host-run-ovn-kubernetes\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.361013 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-env-overrides\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.361234 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-config\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.361303 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/44b9089e-c580-4353-9e4b-04a3a270e59f-ovnkube-script-lib\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.364360 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/44b9089e-c580-4353-9e4b-04a3a270e59f-ovn-node-metrics-cert\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.377695 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmtnq\" (UniqueName: \"kubernetes.io/projected/44b9089e-c580-4353-9e4b-04a3a270e59f-kube-api-access-pmtnq\") pod \"ovnkube-node-dx6zw\" (UID: \"44b9089e-c580-4353-9e4b-04a3a270e59f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.464494 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/2.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.464918 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/1.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.464957 5010 generic.go:334] "Generic (PLEG): container finished" podID="8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef" containerID="350b279aaf7efa7dad21bc0c20fa082b7c655a83b208a5091e614ce3efe34ce4" exitCode=2 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.465014 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerDied","Data":"350b279aaf7efa7dad21bc0c20fa082b7c655a83b208a5091e614ce3efe34ce4"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.465047 5010 scope.go:117] "RemoveContainer" containerID="d974f1823bf410f5d846407d5b464b8c46ac4e2c4c6677553a1772b55a598ebe" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.465495 5010 scope.go:117] "RemoveContainer" containerID="350b279aaf7efa7dad21bc0c20fa082b7c655a83b208a5091e614ce3efe34ce4" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.468738 5010 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovnkube-controller/3.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.473826 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovn-acl-logging/0.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.474396 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68p7p_afbb630a-0dee-4c9c-90ff-cb710b9da3f2/ovn-controller/0.log" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475692 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475725 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475738 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475747 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475741 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475795 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475800 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475977 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476008 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476027 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.475757 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476071 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" exitCode=0 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476090 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" exitCode=143 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476100 5010 generic.go:334] "Generic (PLEG): container finished" podID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" exitCode=143 Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476161 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476205 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476241 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476248 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476255 5010 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476458 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476467 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476474 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476481 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476488 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476495 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476511 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476526 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476536 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476544 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476551 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476558 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476564 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476571 5010 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476577 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476586 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476593 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476605 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476618 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476626 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476633 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476639 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476646 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476653 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476660 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476667 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476682 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476690 5010 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476700 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68p7p" event={"ID":"afbb630a-0dee-4c9c-90ff-cb710b9da3f2","Type":"ContainerDied","Data":"397d6ad2bb41a4df9c0dc30fd14d52b9e67cbf17ccd52dacef60dc2182647ba3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476715 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476724 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476732 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476740 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476747 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476754 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476761 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476768 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476775 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.476782 5010 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.488975 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.509636 5010 scope.go:117] "RemoveContainer" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.516712 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68p7p"] Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.521647 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68p7p"] Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.533704 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.561101 5010 scope.go:117] "RemoveContainer" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.577914 5010 scope.go:117] "RemoveContainer" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.598015 5010 scope.go:117] "RemoveContainer" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.612197 5010 scope.go:117] "RemoveContainer" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.626428 5010 scope.go:117] "RemoveContainer" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.700979 5010 scope.go:117] "RemoveContainer" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.715659 5010 scope.go:117] "RemoveContainer" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.751326 5010 scope.go:117] "RemoveContainer" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.765904 5010 scope.go:117] "RemoveContainer" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.766260 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": container with ID starting with bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a not found: ID does not exist" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766305 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} err="failed to get container status \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": rpc error: code = NotFound desc = could not find container \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": container with ID starting with bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766343 5010 scope.go:117] "RemoveContainer" 
containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.766642 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": container with ID starting with ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db not found: ID does not exist" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766673 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} err="failed to get container status \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": rpc error: code = NotFound desc = could not find container \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": container with ID starting with ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766698 5010 scope.go:117] "RemoveContainer" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.766915 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": container with ID starting with 1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e not found: ID does not exist" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766943 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} err="failed to get container status \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": rpc error: code = NotFound desc = could not find container \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": container with ID starting with 1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.766962 5010 scope.go:117] "RemoveContainer" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.767134 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": container with ID starting with 6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7 not found: ID does not exist" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767153 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} err="failed to get container status \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": rpc error: code = NotFound desc = could not find container \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": container with ID starting with 
6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767166 5010 scope.go:117] "RemoveContainer" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.767380 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": container with ID starting with 24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b not found: ID does not exist" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767406 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} err="failed to get container status \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": rpc error: code = NotFound desc = could not find container \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": container with ID starting with 24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767420 5010 scope.go:117] "RemoveContainer" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.767649 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": container with ID starting with 12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919 not found: ID does not exist" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767670 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} err="failed to get container status \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": rpc error: code = NotFound desc = could not find container \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": container with ID starting with 12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767683 5010 scope.go:117] "RemoveContainer" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.767874 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": container with ID starting with 76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3 not found: ID does not exist" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767895 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} err="failed to get container status \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": rpc 
error: code = NotFound desc = could not find container \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": container with ID starting with 76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.767910 5010 scope.go:117] "RemoveContainer" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.768062 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": container with ID starting with 8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142 not found: ID does not exist" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768082 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} err="failed to get container status \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": rpc error: code = NotFound desc = could not find container \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": container with ID starting with 8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768094 5010 scope.go:117] "RemoveContainer" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.768294 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": container with ID starting with f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf not found: ID does not exist" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768314 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} err="failed to get container status \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": rpc error: code = NotFound desc = could not find container \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": container with ID starting with f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768325 5010 scope.go:117] "RemoveContainer" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53" Feb 03 10:16:57 crc kubenswrapper[5010]: E0203 10:16:57.768573 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": container with ID starting with 5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53 not found: ID does not exist" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768628 5010 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} err="failed to get container status \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": rpc error: code = NotFound desc = could not find container \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": container with ID starting with 5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53 not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768668 5010 scope.go:117] "RemoveContainer" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768897 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} err="failed to get container status \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": rpc error: code = NotFound desc = could not find container \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": container with ID starting with bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.768916 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769104 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} err="failed to get container status \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": rpc error: code = NotFound desc = could not find container \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": container with ID starting with ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769130 5010 scope.go:117] "RemoveContainer" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769417 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} err="failed to get container status \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": rpc error: code = NotFound desc = could not find container \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": container with ID starting with 1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e not found: ID does not exist" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769445 5010 scope.go:117] "RemoveContainer" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7" Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769652 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} err="failed to get container status \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": rpc error: code = NotFound desc = could not find container \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": container with ID starting with 6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7 not found: ID does not exist" Feb 
03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769674 5010 scope.go:117] "RemoveContainer" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769863 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} err="failed to get container status \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": rpc error: code = NotFound desc = could not find container \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": container with ID starting with 24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.769896 5010 scope.go:117] "RemoveContainer" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770073 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} err="failed to get container status \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": rpc error: code = NotFound desc = could not find container \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": container with ID starting with 12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770091 5010 scope.go:117] "RemoveContainer" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770301 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} err="failed to get container status \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": rpc error: code = NotFound desc = could not find container \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": container with ID starting with 76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770329 5010 scope.go:117] "RemoveContainer" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770511 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} err="failed to get container status \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": rpc error: code = NotFound desc = could not find container \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": container with ID starting with 8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770528 5010 scope.go:117] "RemoveContainer" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770704 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} err="failed to get container status \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": rpc error: code = NotFound desc = could not find container \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": container with ID starting with f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770729 5010 scope.go:117] "RemoveContainer" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770905 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} err="failed to get container status \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": rpc error: code = NotFound desc = could not find container \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": container with ID starting with 5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.770926 5010 scope.go:117] "RemoveContainer" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771115 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} err="failed to get container status \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": rpc error: code = NotFound desc = could not find container \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": container with ID starting with bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771142 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771347 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} err="failed to get container status \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": rpc error: code = NotFound desc = could not find container \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": container with ID starting with ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771366 5010 scope.go:117] "RemoveContainer" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771588 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} err="failed to get container status \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": rpc error: code = NotFound desc = could not find container \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": container with ID starting with 1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771623 5010 scope.go:117] "RemoveContainer" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771843 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} err="failed to get container status \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": rpc error: code = NotFound desc = could not find container \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": container with ID starting with 6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.771872 5010 scope.go:117] "RemoveContainer" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772070 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} err="failed to get container status \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": rpc error: code = NotFound desc = could not find container \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": container with ID starting with 24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772093 5010 scope.go:117] "RemoveContainer" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772274 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} err="failed to get container status \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": rpc error: code = NotFound desc = could not find container \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": container with ID starting with 12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772298 5010 scope.go:117] "RemoveContainer" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772499 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} err="failed to get container status \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": rpc error: code = NotFound desc = could not find container \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": container with ID starting with 76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772518 5010 scope.go:117] "RemoveContainer" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772702 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} err="failed to get container status \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": rpc error: code = NotFound desc = could not find container \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": container with ID starting with 8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772727 5010 scope.go:117] "RemoveContainer" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772925 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} err="failed to get container status \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": rpc error: code = NotFound desc = could not find container \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": container with ID starting with f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.772946 5010 scope.go:117] "RemoveContainer" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773156 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} err="failed to get container status \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": rpc error: code = NotFound desc = could not find container \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": container with ID starting with 5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773183 5010 scope.go:117] "RemoveContainer" containerID="bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773444 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a"} err="failed to get container status \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": rpc error: code = NotFound desc = could not find container \"bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a\": container with ID starting with bfdf455fec0761ed4f56e2b27304fc0f214b7525beb9984c17273cf2058d315a not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773465 5010 scope.go:117] "RemoveContainer" containerID="ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773634 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db"} err="failed to get container status \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": rpc error: code = NotFound desc = could not find container \"ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db\": container with ID starting with ac00156071db044c5a1bd15eb95ed6c9889183e3b066401ab66cb111b78a40db not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773654 5010 scope.go:117] "RemoveContainer" containerID="1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773834 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e"} err="failed to get container status \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": rpc error: code = NotFound desc = could not find container \"1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e\": container with ID starting with 1e7546a24120ccfd93cf394070712de1562e217c7210923d7a70748a27e7749e not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.773853 5010 scope.go:117] "RemoveContainer" containerID="6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774010 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7"} err="failed to get container status \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": rpc error: code = NotFound desc = could not find container \"6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7\": container with ID starting with 6a8e8d22af39629be91527ab836c40c27dcd60e1fdc0b19933239627087680b7 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774027 5010 scope.go:117] "RemoveContainer" containerID="24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774182 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b"} err="failed to get container status \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": rpc error: code = NotFound desc = could not find container \"24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b\": container with ID starting with 24fb52b0a881955ea3449a150f513ac628722623f9f0b5e0ff8f355ad4ee7a3b not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774199 5010 scope.go:117] "RemoveContainer" containerID="12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774527 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919"} err="failed to get container status \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": rpc error: code = NotFound desc = could not find container \"12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919\": container with ID starting with 12b183600c5c07964a434ca7cd0cf0c1312931989e8b2d733df3701f56200919 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.774547 5010 scope.go:117] "RemoveContainer" containerID="76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775109 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3"} err="failed to get container status \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": rpc error: code = NotFound desc = could not find container \"76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3\": container with ID starting with 76edcd13b649425c37acc166a132b9f9fbd01a276aeb2afa4b100db4cf8fe8d3 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775141 5010 scope.go:117] "RemoveContainer" containerID="8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775380 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142"} err="failed to get container status \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": rpc error: code = NotFound desc = could not find container \"8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142\": container with ID starting with 8490466c9b3178bafef4b5f496c39fb7b20ae251f9aee046b5deee92abb50142 not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775410 5010 scope.go:117] "RemoveContainer" containerID="f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775627 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf"} err="failed to get container status \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": rpc error: code = NotFound desc = could not find container \"f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf\": container with ID starting with f70a75335dff9d9ba8620ff0b31da6d39e9a83523883c663cf73f75b148230cf not found: ID does not exist"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775648 5010 scope.go:117] "RemoveContainer" containerID="5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"
Feb 03 10:16:57 crc kubenswrapper[5010]: I0203 10:16:57.775849 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53"} err="failed to get container status \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": rpc error: code = NotFound desc = could not find container \"5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53\": container with ID starting with 5ca0afc026f9cc6526c90dc1a5f469598043a0444ae73c7e64acea19ceb64f53 not found: ID does not exist"
Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.326640 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-bfc2c"
Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.482956 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f5tpq_8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef/kube-multus/2.log"
Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.483064 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f5tpq" event={"ID":"8b16bcfb-db8c-4fbe-98f3-2d6c5353cfef","Type":"ContainerStarted","Data":"572bea666e8d94e55589ce0ee754fcd331cf7f3eb1bcbaf5139a1e8bb58fe555"}
Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.488453 5010 generic.go:334] "Generic (PLEG): container finished" podID="44b9089e-c580-4353-9e4b-04a3a270e59f" containerID="fa2da3302ee5fa1d268ceb3a598a189ac7d6e299c97d6dee08f81aa1fb56eb01" exitCode=0
Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.488527 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerDied","Data":"fa2da3302ee5fa1d268ceb3a598a189ac7d6e299c97d6dee08f81aa1fb56eb01"}
Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.488566 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"1fa76f159b9c052306233546e5e3cd8d81de34f2a2da7a289615528f73058fbe"}
Feb 03 10:16:58 crc kubenswrapper[5010]: I0203 10:16:58.510431 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afbb630a-0dee-4c9c-90ff-cb710b9da3f2" path="/var/lib/kubelet/pods/afbb630a-0dee-4c9c-90ff-cb710b9da3f2/volumes"
Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498420 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"bca9e630bba9adf10225d1b40d115a3b086a1ff3fdd142b899c35dff3f4a914d"}
Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498746 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"92d72cb0f194ae589805d49dd0b68ceec7415daabad163e2247ec0a73716dc5c"}
Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498769 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"4a12e96fb4ee57376c3d040f69a13b895c682aed4f5028634335c518c51c8f0c"}
Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498780 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"b79f479f381497cc6d07c190dea9414670d2433fe7906dd0f406042adace4073"}
Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498789 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"cca2e8459522efb134428e3e9d01437c0c1225119fa23540fa5134fad3cb23f8"}
Feb 03 10:16:59 crc kubenswrapper[5010]: I0203 10:16:59.498798 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"117d2f3555d10e53e86cfbaa4ed8c90b1e5a3f5dec1921952630fad01f344b5e"}
Feb 03 10:17:01 crc kubenswrapper[5010]: I0203 10:17:01.510910 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"b32863f8ff6cb7f7f2e794c0b071138811c9d86d4893ef7d9c37067a9f430006"}
Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.532442 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" event={"ID":"44b9089e-c580-4353-9e4b-04a3a270e59f","Type":"ContainerStarted","Data":"56cb1f51ac5d26eb76ba983dc58bfc8b2bed77b234f386c5830199051d68ed79"}
Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.533024 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw"
Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.533039 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw"
Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.533051 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw"
Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.568131 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw" podStartSLOduration=7.568109369 podStartE2EDuration="7.568109369s" podCreationTimestamp="2026-02-03 10:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:17:04.564451928 +0000 UTC m=+894.720428067" watchObservedRunningTime="2026-02-03 10:17:04.568109369 +0000 UTC m=+894.724085498"
Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.575133 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw"
Feb 03 10:17:04 crc kubenswrapper[5010]: I0203 10:17:04.582074 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw"
Feb 03 10:17:16 crc kubenswrapper[5010]: I0203 10:17:16.390429 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:17:16 crc kubenswrapper[5010]: I0203 10:17:16.391263 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:17:27 crc kubenswrapper[5010]: I0203 10:17:27.514033 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dx6zw"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.104714 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"]
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.106139 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.108630 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.116508 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"]
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.204409 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.204480 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ljf9\" (UniqueName: \"kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.204507 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.305929 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.305978 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ljf9\" (UniqueName: \"kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.305998 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.306869 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.307023 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.327329 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ljf9\" (UniqueName: \"kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.423652 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.589305 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"]
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.731427 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerStarted","Data":"4ed2d000e5539e4f0f00f339331ba7863091489a20723c71752d5bc5ce0e5a04"}
Feb 03 10:17:39 crc kubenswrapper[5010]: I0203 10:17:39.731745 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerStarted","Data":"b9c5e242439c1a925e9e8a69b8c937e6e81018435fb3186bd47eec8937e184d4"}
Feb 03 10:17:40 crc kubenswrapper[5010]: I0203 10:17:40.737782 5010 generic.go:334] "Generic (PLEG): container finished" podID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerID="4ed2d000e5539e4f0f00f339331ba7863091489a20723c71752d5bc5ce0e5a04" exitCode=0
Feb 03 10:17:40 crc kubenswrapper[5010]: I0203 10:17:40.737819 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerDied","Data":"4ed2d000e5539e4f0f00f339331ba7863091489a20723c71752d5bc5ce0e5a04"}
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.411497 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"]
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.413411 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.424750 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"]
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.531136 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.531202 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q58z\" (UniqueName: \"kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.531290 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.632098 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.632146 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.632174 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q58z\" (UniqueName: \"kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.633663 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.633781 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.665805 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q58z\" (UniqueName: \"kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z\") pod \"redhat-operators-jw95h\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.733337 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:41 crc kubenswrapper[5010]: I0203 10:17:41.996474 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"]
Feb 03 10:17:42 crc kubenswrapper[5010]: W0203 10:17:42.012040 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda595e8ea_8e1d_44c1_9ee0_0e40fa3a0f96.slice/crio-60580599bfa6e867910c3854625eecb82cba759cc65d13303775a63e7e0ee852 WatchSource:0}: Error finding container 60580599bfa6e867910c3854625eecb82cba759cc65d13303775a63e7e0ee852: Status 404 returned error can't find the container with id 60580599bfa6e867910c3854625eecb82cba759cc65d13303775a63e7e0ee852
Feb 03 10:17:42 crc kubenswrapper[5010]: I0203 10:17:42.750254 5010 generic.go:334] "Generic (PLEG): container finished" podID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerID="443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9" exitCode=0
Feb 03 10:17:42 crc kubenswrapper[5010]: I0203 10:17:42.750340 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerDied","Data":"443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9"}
Feb 03 10:17:42 crc kubenswrapper[5010]: I0203 10:17:42.750633 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerStarted","Data":"60580599bfa6e867910c3854625eecb82cba759cc65d13303775a63e7e0ee852"}
Feb 03 10:17:42 crc kubenswrapper[5010]: I0203 10:17:42.752965 5010 generic.go:334] "Generic (PLEG): container finished" podID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerID="77236826d76411acd09f4b6acbc2cbab98aaaed6120d41840fe09cf196c2066a" exitCode=0
Feb 03 10:17:42 crc kubenswrapper[5010]: I0203 10:17:42.753073 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerDied","Data":"77236826d76411acd09f4b6acbc2cbab98aaaed6120d41840fe09cf196c2066a"}
Feb 03 10:17:43 crc kubenswrapper[5010]: I0203 10:17:43.759543 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerStarted","Data":"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c"}
Feb 03 10:17:43 crc kubenswrapper[5010]: I0203 10:17:43.764918 5010 generic.go:334] "Generic (PLEG): container finished" podID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerID="288db0e960f4e0f01e04dc94840da4564bc08e4cfd6ccbf106dfad7054926599" exitCode=0
Feb 03 10:17:43 crc kubenswrapper[5010]: I0203 10:17:43.765182 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerDied","Data":"288db0e960f4e0f01e04dc94840da4564bc08e4cfd6ccbf106dfad7054926599"}
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.559593 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.755703 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle\") pod \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") "
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.755993 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util\") pod \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") "
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.756051 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ljf9\" (UniqueName: \"kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9\") pod \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\" (UID: \"a64fc313-0bcd-40df-a19f-052eb0d1ce8a\") "
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.762800 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle" (OuterVolumeSpecName: "bundle") pod "a64fc313-0bcd-40df-a19f-052eb0d1ce8a" (UID: "a64fc313-0bcd-40df-a19f-052eb0d1ce8a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.784416 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl" event={"ID":"a64fc313-0bcd-40df-a19f-052eb0d1ce8a","Type":"ContainerDied","Data":"b9c5e242439c1a925e9e8a69b8c937e6e81018435fb3186bd47eec8937e184d4"}
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.784489 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c5e242439c1a925e9e8a69b8c937e6e81018435fb3186bd47eec8937e184d4"
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.784588 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl"
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.787992 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util" (OuterVolumeSpecName: "util") pod "a64fc313-0bcd-40df-a19f-052eb0d1ce8a" (UID: "a64fc313-0bcd-40df-a19f-052eb0d1ce8a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.857082 5010 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.857117 5010 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-util\") on node \"crc\" DevicePath \"\""
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.940933 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9" (OuterVolumeSpecName: "kube-api-access-5ljf9") pod "a64fc313-0bcd-40df-a19f-052eb0d1ce8a" (UID: "a64fc313-0bcd-40df-a19f-052eb0d1ce8a"). InnerVolumeSpecName "kube-api-access-5ljf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:17:45 crc kubenswrapper[5010]: I0203 10:17:45.958531 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ljf9\" (UniqueName: \"kubernetes.io/projected/a64fc313-0bcd-40df-a19f-052eb0d1ce8a-kube-api-access-5ljf9\") on node \"crc\" DevicePath \"\""
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.389907 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.389977 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.790673 5010 generic.go:334] "Generic (PLEG): container finished" podID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerID="3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c" exitCode=0
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.790711 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerDied","Data":"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c"}
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.813467 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dk2xz"]
Feb 03 10:17:46 crc kubenswrapper[5010]: E0203 10:17:46.814748 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="extract"
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.814874 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="extract"
Feb 03 10:17:46 crc kubenswrapper[5010]: E0203 10:17:46.814951 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="pull"
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.815003 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="pull"
Feb 03 10:17:46 crc kubenswrapper[5010]: E0203 10:17:46.815086 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="util"
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.815138 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="util"
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.815347 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64fc313-0bcd-40df-a19f-052eb0d1ce8a" containerName="extract"
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.816358 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dk2xz"
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.823328 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dk2xz"]
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.972512 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz"
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.972556 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz"
Feb 03 10:17:46 crc kubenswrapper[5010]: I0203 10:17:46.972579 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjwsm\" (UniqueName: \"kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz"
Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.073572 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz"
Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.073621 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz"
Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.073657 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjwsm\" (UniqueName: \"kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz"
Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.074465 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz"
Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.074513 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz"
Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.092141 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjwsm\" (UniqueName: \"kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm\") pod \"community-operators-dk2xz\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " pod="openshift-marketplace/community-operators-dk2xz"
Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.135676 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dk2xz"
Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.762326 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dk2xz"]
Feb 03 10:17:47 crc kubenswrapper[5010]: W0203 10:17:47.768783 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae42090_f4be_43c8_b0b1_90fe576195a3.slice/crio-a8bec8e2b56c771c7079c4cac54a1acdfd8e585a247992ddbbfe6031d2222fb8 WatchSource:0}: Error finding container a8bec8e2b56c771c7079c4cac54a1acdfd8e585a247992ddbbfe6031d2222fb8: Status 404 returned error can't find the container with id a8bec8e2b56c771c7079c4cac54a1acdfd8e585a247992ddbbfe6031d2222fb8
Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.799644 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerStarted","Data":"a8bec8e2b56c771c7079c4cac54a1acdfd8e585a247992ddbbfe6031d2222fb8"}
Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.802820 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerStarted","Data":"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981"}
Feb 03 10:17:47 crc kubenswrapper[5010]: I0203 10:17:47.839162 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jw95h" podStartSLOduration=2.277285675 podStartE2EDuration="6.839147197s" podCreationTimestamp="2026-02-03 10:17:41 +0000 UTC" firstStartedPulling="2026-02-03 10:17:42.752302224 +0000 UTC m=+932.908278353" lastFinishedPulling="2026-02-03 10:17:47.314163746 +0000 UTC m=+937.470139875" observedRunningTime="2026-02-03 10:17:47.837496134 +0000 UTC m=+937.993472273" watchObservedRunningTime="2026-02-03 10:17:47.839147197 +0000 UTC m=+937.995123326"
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.196691 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-frs8s"]
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.197635 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-frs8s"
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.199660 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.199797 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-fwd79"
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.200044 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.256300 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-frs8s"]
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.292843 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-827bf\" (UniqueName: \"kubernetes.io/projected/e5c85e5b-ab19-414d-97e6-767b9e01f731-kube-api-access-827bf\") pod \"nmstate-operator-646758c888-frs8s\" (UID: \"e5c85e5b-ab19-414d-97e6-767b9e01f731\") " pod="openshift-nmstate/nmstate-operator-646758c888-frs8s"
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.393586 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-827bf\" (UniqueName: \"kubernetes.io/projected/e5c85e5b-ab19-414d-97e6-767b9e01f731-kube-api-access-827bf\") pod \"nmstate-operator-646758c888-frs8s\" (UID: \"e5c85e5b-ab19-414d-97e6-767b9e01f731\") " pod="openshift-nmstate/nmstate-operator-646758c888-frs8s"
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.411275 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-827bf\" (UniqueName: \"kubernetes.io/projected/e5c85e5b-ab19-414d-97e6-767b9e01f731-kube-api-access-827bf\") pod \"nmstate-operator-646758c888-frs8s\" (UID: \"e5c85e5b-ab19-414d-97e6-767b9e01f731\") " pod="openshift-nmstate/nmstate-operator-646758c888-frs8s"
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.553531 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-frs8s"
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.800653 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-frs8s"]
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.823288 5010 generic.go:334] "Generic (PLEG): container finished" podID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerID="5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d" exitCode=0
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.823364 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerDied","Data":"5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d"}
Feb 03 10:17:48 crc kubenswrapper[5010]: I0203 10:17:48.824358 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-frs8s" event={"ID":"e5c85e5b-ab19-414d-97e6-767b9e01f731","Type":"ContainerStarted","Data":"5908b98cc9c4e8e06b25a0ee20e6cc49102e6a6e209fbb852ae959a901689b23"}
Feb 03 10:17:50 crc kubenswrapper[5010]: I0203 10:17:50.835988 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerStarted","Data":"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769"}
Feb 03 10:17:51 crc kubenswrapper[5010]: I0203 10:17:51.733784 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:51 crc kubenswrapper[5010]: I0203 10:17:51.734453 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jw95h"
Feb 03 10:17:51 crc kubenswrapper[5010]: I0203 10:17:51.868968 5010 generic.go:334] "Generic (PLEG): container finished" podID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerID="646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769" exitCode=0
Feb 03 10:17:51 crc kubenswrapper[5010]: I0203 10:17:51.869042 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerDied","Data":"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769"}
Feb 03 10:17:52 crc kubenswrapper[5010]: I0203 10:17:52.855379 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jw95h" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="registry-server" probeResult="failure" output=<
Feb 03 10:17:52 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s
Feb 03 10:17:52 crc kubenswrapper[5010]: >
Feb 03 10:17:52 crc kubenswrapper[5010]: I0203 10:17:52.878189 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerStarted","Data":"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5"}
Feb 03 10:17:52 crc kubenswrapper[5010]: I0203 10:17:52.880051 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-frs8s" event={"ID":"e5c85e5b-ab19-414d-97e6-767b9e01f731","Type":"ContainerStarted","Data":"231f510af2241efaa85d823418b2221940ce2782889b8739d680d24932992e4c"}
Feb 03 10:17:52 crc kubenswrapper[5010]: I0203 10:17:52.907896 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dk2xz" podStartSLOduration=3.317217381 podStartE2EDuration="6.907878953s" podCreationTimestamp="2026-02-03 10:17:46 +0000 UTC" firstStartedPulling="2026-02-03 10:17:48.825839113 +0000 UTC m=+938.981815242" lastFinishedPulling="2026-02-03 10:17:52.416500685 +0000 UTC m=+942.572476814" observedRunningTime="2026-02-03 10:17:52.901859588 +0000 UTC m=+943.057835717" watchObservedRunningTime="2026-02-03 10:17:52.907878953 +0000 UTC m=+943.063855082"
Feb 03 10:17:52 crc kubenswrapper[5010]: I0203 10:17:52.922633 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-frs8s" podStartSLOduration=1.830706026 podStartE2EDuration="4.922613221s" podCreationTimestamp="2026-02-03 10:17:48 +0000 UTC" firstStartedPulling="2026-02-03 10:17:48.818935845 +0000 UTC m=+938.974911984" lastFinishedPulling="2026-02-03 10:17:51.91084305 +0000 UTC m=+942.066819179" observedRunningTime="2026-02-03 10:17:52.919463871 +0000 UTC m=+943.075440020" watchObservedRunningTime="2026-02-03 10:17:52.922613221 +0000 UTC m=+943.078589350"
Feb 03 10:17:53 crc kubenswrapper[5010]: I0203 10:17:53.938396 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hl7ls"]
Feb 03 10:17:53 crc kubenswrapper[5010]: I0203 10:17:53.939416 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls"
Feb 03 10:17:53 crc kubenswrapper[5010]: I0203 10:17:53.945585 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-h8tpr"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.018375 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hl7ls"]
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.115021 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skncx\" (UniqueName: \"kubernetes.io/projected/552fa369-352c-4690-aa39-f0364021feae-kube-api-access-skncx\") pod \"nmstate-metrics-54757c584b-hl7ls\" (UID: \"552fa369-352c-4690-aa39-f0364021feae\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.167033 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-55jg2"]
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.168076 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.189604 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"]
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.190285 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.193712 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.327241 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skncx\" (UniqueName: \"kubernetes.io/projected/552fa369-352c-4690-aa39-f0364021feae-kube-api-access-skncx\") pod \"nmstate-metrics-54757c584b-hl7ls\" (UID: \"552fa369-352c-4690-aa39-f0364021feae\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.336071 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"]
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.428849 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72ppl\" (UniqueName: \"kubernetes.io/projected/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-kube-api-access-72ppl\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.428899 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.429003 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-dbus-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.429065 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zdt\" (UniqueName: \"kubernetes.io/projected/d47b696a-a1d0-4389-a099-7f375ab72f8c-kube-api-access-22zdt\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.429129 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-ovs-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.429179 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-nmstate-lock\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.530474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-dbus-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.530594 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zdt\" (UniqueName: \"kubernetes.io/projected/d47b696a-a1d0-4389-a099-7f375ab72f8c-kube-api-access-22zdt\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.530886 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-dbus-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.530997 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-ovs-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.531085 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-nmstate-lock\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.531157 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.531175 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72ppl\" (UniqueName: \"kubernetes.io/projected/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-kube-api-access-72ppl\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.531416 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-ovs-socket\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: E0203 10:17:54.531434 5010 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.531456 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d47b696a-a1d0-4389-a099-7f375ab72f8c-nmstate-lock\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: E0203 10:17:54.531503 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair podName:1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a nodeName:}" failed. No retries permitted until 2026-02-03 10:17:55.031476455 +0000 UTC m=+945.187452584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-2xtg6" (UID: "1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a") : secret "openshift-nmstate-webhook" not found
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.629993 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skncx\" (UniqueName: \"kubernetes.io/projected/552fa369-352c-4690-aa39-f0364021feae-kube-api-access-skncx\") pod \"nmstate-metrics-54757c584b-hl7ls\" (UID: \"552fa369-352c-4690-aa39-f0364021feae\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.634925 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zdt\" (UniqueName: \"kubernetes.io/projected/d47b696a-a1d0-4389-a099-7f375ab72f8c-kube-api-access-22zdt\") pod \"nmstate-handler-55jg2\" (UID: \"d47b696a-a1d0-4389-a099-7f375ab72f8c\") " pod="openshift-nmstate/nmstate-handler-55jg2"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.635450 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72ppl\" (UniqueName: \"kubernetes.io/projected/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-kube-api-access-72ppl\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.639193 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls"
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.705433 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg"]
Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.706548 5010 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.711465 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hgx6j" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.719346 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.719654 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.724737 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg"] Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.736247 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4tgz\" (UniqueName: \"kubernetes.io/projected/a09e0456-1529-4ece-9266-d02a283d6bd1-kube-api-access-l4tgz\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.736309 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.736446 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a09e0456-1529-4ece-9266-d02a283d6bd1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.793257 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:17:54 crc kubenswrapper[5010]: W0203 10:17:54.825019 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd47b696a_a1d0_4389_a099_7f375ab72f8c.slice/crio-0018ace989cd238398395805035c6036e6d60f23cd14e853f7e6eed50bcba7d7 WatchSource:0}: Error finding container 0018ace989cd238398395805035c6036e6d60f23cd14e853f7e6eed50bcba7d7: Status 404 returned error can't find the container with id 0018ace989cd238398395805035c6036e6d60f23cd14e853f7e6eed50bcba7d7 Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.837273 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4tgz\" (UniqueName: \"kubernetes.io/projected/a09e0456-1529-4ece-9266-d02a283d6bd1-kube-api-access-l4tgz\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.837314 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.837373 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a09e0456-1529-4ece-9266-d02a283d6bd1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: E0203 10:17:54.837558 5010 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 03 10:17:54 crc kubenswrapper[5010]: E0203 10:17:54.837618 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert podName:a09e0456-1529-4ece-9266-d02a283d6bd1 nodeName:}" failed. No retries permitted until 2026-02-03 10:17:55.337599856 +0000 UTC m=+945.493575995 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-npjjg" (UID: "a09e0456-1529-4ece-9266-d02a283d6bd1") : secret "plugin-serving-cert" not found Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.838320 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a09e0456-1529-4ece-9266-d02a283d6bd1-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.856916 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4tgz\" (UniqueName: \"kubernetes.io/projected/a09e0456-1529-4ece-9266-d02a283d6bd1-kube-api-access-l4tgz\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.890298 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-55jg2" event={"ID":"d47b696a-a1d0-4389-a099-7f375ab72f8c","Type":"ContainerStarted","Data":"0018ace989cd238398395805035c6036e6d60f23cd14e853f7e6eed50bcba7d7"} Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.905027 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-85556757c-xgtrl"] Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.905920 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.946645 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-oauth-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.946740 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-oauth-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.946808 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.946826 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8gd\" (UniqueName: \"kubernetes.io/projected/7c407954-b971-4641-b466-882aecfa452d-kube-api-access-zm8gd\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.946962 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-service-ca\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.947010 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-trusted-ca-bundle\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.947074 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-console-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:54 crc kubenswrapper[5010]: I0203 10:17:54.948471 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85556757c-xgtrl"] Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048449 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-service-ca\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048515 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-trusted-ca-bundle\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048559 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-console-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048597 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-oauth-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048633 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-oauth-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048694 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: 
\"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048716 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8gd\" (UniqueName: \"kubernetes.io/projected/7c407954-b971-4641-b466-882aecfa452d-kube-api-access-zm8gd\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.048757 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.050502 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-trusted-ca-bundle\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.050624 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-oauth-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.050630 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-console-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.050912 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c407954-b971-4641-b466-882aecfa452d-service-ca\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.053898 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-serving-cert\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.053916 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7c407954-b971-4641-b466-882aecfa452d-console-oauth-config\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.054538 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-2xtg6\" (UID: \"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.067969 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8gd\" (UniqueName: \"kubernetes.io/projected/7c407954-b971-4641-b466-882aecfa452d-kube-api-access-zm8gd\") pod \"console-85556757c-xgtrl\" (UID: \"7c407954-b971-4641-b466-882aecfa452d\") " pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.237895 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.274663 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.354594 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.359513 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a09e0456-1529-4ece-9266-d02a283d6bd1-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-npjjg\" (UID: \"a09e0456-1529-4ece-9266-d02a283d6bd1\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.565878 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.745677 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hl7ls"] Feb 03 10:17:55 crc kubenswrapper[5010]: W0203 10:17:55.787400 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552fa369_352c_4690_aa39_f0364021feae.slice/crio-279091e0bf5fa5d7da037200c2d0b459b254335a9a3782229b5ef8f286367044 WatchSource:0}: Error finding container 279091e0bf5fa5d7da037200c2d0b459b254335a9a3782229b5ef8f286367044: Status 404 returned error can't find the container with id 279091e0bf5fa5d7da037200c2d0b459b254335a9a3782229b5ef8f286367044 Feb 03 10:17:55 crc kubenswrapper[5010]: I0203 10:17:55.960715 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" event={"ID":"552fa369-352c-4690-aa39-f0364021feae","Type":"ContainerStarted","Data":"279091e0bf5fa5d7da037200c2d0b459b254335a9a3782229b5ef8f286367044"} Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.006320 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85556757c-xgtrl"] Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.111402 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg"] Feb 03 10:17:56 crc kubenswrapper[5010]: W0203 10:17:56.124617 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda09e0456_1529_4ece_9266_d02a283d6bd1.slice/crio-e7acd10e9541fdf5180baea8b3e92f4170102e8831c35308def7d9c0999d2c81 WatchSource:0}: Error finding container e7acd10e9541fdf5180baea8b3e92f4170102e8831c35308def7d9c0999d2c81: Status 404 returned error can't find the container with id e7acd10e9541fdf5180baea8b3e92f4170102e8831c35308def7d9c0999d2c81 Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.283260 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6"] Feb 03 10:17:56 crc kubenswrapper[5010]: W0203 10:17:56.289560 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1336bbfa_f4c5_4e35_9b48_d0e8df8f3e7a.slice/crio-3ff4cb229308fbcddb94c59da343ba1bb478794881e8a1acafbe1c8a840438bc WatchSource:0}: Error finding container 3ff4cb229308fbcddb94c59da343ba1bb478794881e8a1acafbe1c8a840438bc: Status 404 returned error can't find the container with id 3ff4cb229308fbcddb94c59da343ba1bb478794881e8a1acafbe1c8a840438bc Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.968100 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85556757c-xgtrl" event={"ID":"7c407954-b971-4641-b466-882aecfa452d","Type":"ContainerStarted","Data":"7ef5324afbb31210395ef76208265ecaecefc136478d6f66e32869e8c859cd89"} Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.968284 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85556757c-xgtrl" event={"ID":"7c407954-b971-4641-b466-882aecfa452d","Type":"ContainerStarted","Data":"934148e529d4479274f5172ee1c039b370951c343ba6b0480f971775fe9fa002"} Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.970093 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" 
event={"ID":"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a","Type":"ContainerStarted","Data":"3ff4cb229308fbcddb94c59da343ba1bb478794881e8a1acafbe1c8a840438bc"} Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.971501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" event={"ID":"a09e0456-1529-4ece-9266-d02a283d6bd1","Type":"ContainerStarted","Data":"e7acd10e9541fdf5180baea8b3e92f4170102e8831c35308def7d9c0999d2c81"} Feb 03 10:17:56 crc kubenswrapper[5010]: I0203 10:17:56.993916 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-85556757c-xgtrl" podStartSLOduration=2.993866775 podStartE2EDuration="2.993866775s" podCreationTimestamp="2026-02-03 10:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:17:56.993400693 +0000 UTC m=+947.149376932" watchObservedRunningTime="2026-02-03 10:17:56.993866775 +0000 UTC m=+947.149842914" Feb 03 10:17:57 crc kubenswrapper[5010]: I0203 10:17:57.136852 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:57 crc kubenswrapper[5010]: I0203 10:17:57.136911 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:57 crc kubenswrapper[5010]: I0203 10:17:57.181144 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:58 crc kubenswrapper[5010]: I0203 10:17:58.054253 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:17:58 crc kubenswrapper[5010]: I0203 10:17:58.097345 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dk2xz"] Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.005863 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dk2xz" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="registry-server" containerID="cri-o://42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5" gracePeriod=2 Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.773070 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.910358 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities\") pod \"aae42090-f4be-43c8-b0b1-90fe576195a3\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.910698 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjwsm\" (UniqueName: \"kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm\") pod \"aae42090-f4be-43c8-b0b1-90fe576195a3\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.910739 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content\") pod \"aae42090-f4be-43c8-b0b1-90fe576195a3\" (UID: \"aae42090-f4be-43c8-b0b1-90fe576195a3\") " Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.911439 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities" (OuterVolumeSpecName: "utilities") pod "aae42090-f4be-43c8-b0b1-90fe576195a3" (UID: "aae42090-f4be-43c8-b0b1-90fe576195a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.922448 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm" (OuterVolumeSpecName: "kube-api-access-rjwsm") pod "aae42090-f4be-43c8-b0b1-90fe576195a3" (UID: "aae42090-f4be-43c8-b0b1-90fe576195a3"). InnerVolumeSpecName "kube-api-access-rjwsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:18:00 crc kubenswrapper[5010]: I0203 10:18:00.967726 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aae42090-f4be-43c8-b0b1-90fe576195a3" (UID: "aae42090-f4be-43c8-b0b1-90fe576195a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.012181 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.012205 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae42090-f4be-43c8-b0b1-90fe576195a3-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.012229 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjwsm\" (UniqueName: \"kubernetes.io/projected/aae42090-f4be-43c8-b0b1-90fe576195a3-kube-api-access-rjwsm\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.020829 5010 generic.go:334] "Generic (PLEG): container finished" podID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerID="42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5" exitCode=0 Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.020879 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerDied","Data":"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5"} Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.020909 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk2xz" event={"ID":"aae42090-f4be-43c8-b0b1-90fe576195a3","Type":"ContainerDied","Data":"a8bec8e2b56c771c7079c4cac54a1acdfd8e585a247992ddbbfe6031d2222fb8"} Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.020931 5010 scope.go:117] "RemoveContainer" containerID="42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.021086 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dk2xz" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.055308 5010 scope.go:117] "RemoveContainer" containerID="646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.056306 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dk2xz"] Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.061738 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dk2xz"] Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.099774 5010 scope.go:117] "RemoveContainer" containerID="5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.120276 5010 scope.go:117] "RemoveContainer" containerID="42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5" Feb 03 10:18:01 crc kubenswrapper[5010]: E0203 10:18:01.123368 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5\": container with ID starting with 42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5 not found: ID does not exist" containerID="42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.123406 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5"} err="failed to get container status \"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5\": rpc error: code = NotFound desc = could not find container \"42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5\": container with ID starting with 42a5679f2bd4fd1564b513dc66e4c7a7acdf5afe4e21f98a3de4359c04b642d5 not found: ID does not exist" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.123432 5010 scope.go:117] "RemoveContainer" containerID="646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769" Feb 03 10:18:01 crc kubenswrapper[5010]: E0203 10:18:01.123806 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769\": container with ID starting with 646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769 not found: ID does not exist" containerID="646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.123833 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769"} err="failed to get container status \"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769\": rpc error: code = NotFound desc = could not find container \"646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769\": container with ID starting with 646c66b8f94cfde5c6d8883c2c7e71e6bb79c1b3b31a40c92dea00ebb09f1769 not found: ID does not exist" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.123850 5010 scope.go:117] "RemoveContainer" containerID="5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d" Feb 03 10:18:01 crc kubenswrapper[5010]: E0203 10:18:01.125358 5010 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d\": container with ID starting with 5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d not found: ID does not exist" containerID="5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.125391 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d"} err="failed to get container status \"5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d\": rpc error: code = NotFound desc = could not find container \"5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d\": container with ID starting with 5c382ebad5e62922e5ab93ec93d495f5875cfe47f60ced4a82342b11f3962e8d not found: ID does not exist" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.799845 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:18:01 crc kubenswrapper[5010]: I0203 10:18:01.852947 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.031188 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" event={"ID":"1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a","Type":"ContainerStarted","Data":"914f343841c6c49951d3f9e532eaff729c8c8a12f3dd90eb117eb0a5db2a5799"} Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.031858 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.035422 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-55jg2" event={"ID":"d47b696a-a1d0-4389-a099-7f375ab72f8c","Type":"ContainerStarted","Data":"bd46edd0bf6b0328b0b416fd6991b88ce38b9657e6b4984ab8015caf312909ad"} Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.035641 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.037333 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" event={"ID":"552fa369-352c-4690-aa39-f0364021feae","Type":"ContainerStarted","Data":"8eeea6bb6655951282cb8cc5b8e5aa47576a34145ef0e0a35843a13a66dfaef7"} Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.039333 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" event={"ID":"a09e0456-1529-4ece-9266-d02a283d6bd1","Type":"ContainerStarted","Data":"e0199070d116252057e18c698125ba1e46cd9c4a0ceacf81e9ba2c6be88888a7"} Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.051188 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" podStartSLOduration=3.34070146 podStartE2EDuration="8.051169438s" podCreationTimestamp="2026-02-03 10:17:54 +0000 UTC" firstStartedPulling="2026-02-03 10:17:56.292809112 +0000 UTC m=+946.448785251" lastFinishedPulling="2026-02-03 10:18:01.0032771 +0000 UTC m=+951.159253229" observedRunningTime="2026-02-03 10:18:02.049838134 +0000 UTC m=+952.205814293" 
watchObservedRunningTime="2026-02-03 10:18:02.051169438 +0000 UTC m=+952.207145567" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.070948 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-npjjg" podStartSLOduration=3.207698345 podStartE2EDuration="8.070924695s" podCreationTimestamp="2026-02-03 10:17:54 +0000 UTC" firstStartedPulling="2026-02-03 10:17:56.127495827 +0000 UTC m=+946.283471956" lastFinishedPulling="2026-02-03 10:18:00.990722177 +0000 UTC m=+951.146698306" observedRunningTime="2026-02-03 10:18:02.067102047 +0000 UTC m=+952.223078176" watchObservedRunningTime="2026-02-03 10:18:02.070924695 +0000 UTC m=+952.226900824" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.095274 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-55jg2" podStartSLOduration=1.9212982840000001 podStartE2EDuration="8.09524052s" podCreationTimestamp="2026-02-03 10:17:54 +0000 UTC" firstStartedPulling="2026-02-03 10:17:54.828629895 +0000 UTC m=+944.984606024" lastFinishedPulling="2026-02-03 10:18:01.002572131 +0000 UTC m=+951.158548260" observedRunningTime="2026-02-03 10:18:02.092242933 +0000 UTC m=+952.248219072" watchObservedRunningTime="2026-02-03 10:18:02.09524052 +0000 UTC m=+952.251216649" Feb 03 10:18:02 crc kubenswrapper[5010]: I0203 10:18:02.530623 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" path="/var/lib/kubelet/pods/aae42090-f4be-43c8-b0b1-90fe576195a3/volumes" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.004451 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"] Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.045886 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jw95h" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="registry-server" containerID="cri-o://c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981" gracePeriod=2 Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.788731 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.880378 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q58z\" (UniqueName: \"kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z\") pod \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.880537 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities\") pod \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.880569 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content\") pod \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\" (UID: \"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96\") " Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.881652 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities" (OuterVolumeSpecName: "utilities") pod "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" (UID: "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.888426 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z" (OuterVolumeSpecName: "kube-api-access-4q58z") pod "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" (UID: "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96"). InnerVolumeSpecName "kube-api-access-4q58z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.982384 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q58z\" (UniqueName: \"kubernetes.io/projected/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-kube-api-access-4q58z\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.982759 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:03 crc kubenswrapper[5010]: I0203 10:18:03.999121 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" (UID: "a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.052303 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" event={"ID":"552fa369-352c-4690-aa39-f0364021feae","Type":"ContainerStarted","Data":"6c580a63487f5ce48ebe5fe9ebbd7d8e657990d0e8338ef14f54796ae9c62b21"} Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.053544 5010 generic.go:334] "Generic (PLEG): container finished" podID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerID="c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981" exitCode=0 Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.053570 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerDied","Data":"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981"} Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.053592 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw95h" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.053621 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw95h" event={"ID":"a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96","Type":"ContainerDied","Data":"60580599bfa6e867910c3854625eecb82cba759cc65d13303775a63e7e0ee852"} Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.053642 5010 scope.go:117] "RemoveContainer" containerID="c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.070139 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-hl7ls" podStartSLOduration=3.099355845 podStartE2EDuration="11.070119561s" podCreationTimestamp="2026-02-03 10:17:53 +0000 UTC" firstStartedPulling="2026-02-03 10:17:55.791988282 +0000 UTC m=+945.947964411" lastFinishedPulling="2026-02-03 10:18:03.762751998 +0000 UTC m=+953.918728127" observedRunningTime="2026-02-03 10:18:04.069441103 +0000 UTC m=+954.225417242" watchObservedRunningTime="2026-02-03 10:18:04.070119561 +0000 UTC m=+954.226095700" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.086899 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.094602 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"] Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.095076 5010 scope.go:117] "RemoveContainer" containerID="3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.100007 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jw95h"] Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.113663 5010 scope.go:117] "RemoveContainer" containerID="443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.127412 5010 scope.go:117] "RemoveContainer" containerID="c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981" Feb 03 10:18:04 crc kubenswrapper[5010]: E0203 10:18:04.127815 5010 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981\": container with ID starting with c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981 not found: ID does not exist" containerID="c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.127843 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981"} err="failed to get container status \"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981\": rpc error: code = NotFound desc = could not find container \"c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981\": container with ID starting with c0e54b73e6b5b107c61c7d815c3b36fe1b46587e120a837fe789a5cfb5b00981 not found: ID does not exist" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.127863 5010 scope.go:117] "RemoveContainer" containerID="3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c" Feb 03 10:18:04 crc kubenswrapper[5010]: E0203 10:18:04.128208 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c\": container with ID starting with 3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c not found: ID does not exist" containerID="3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.128256 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c"} err="failed to get container status \"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c\": rpc error: code = NotFound desc = could not find container \"3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c\": container with ID starting with 3233b7a84639e8da2f401885f649b9998961cd9522c1b313c054b9fc5b07696c not found: ID does not exist" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.128275 5010 scope.go:117] "RemoveContainer" containerID="443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9" Feb 03 10:18:04 crc kubenswrapper[5010]: E0203 10:18:04.128635 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9\": container with ID starting with 443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9 not found: ID does not exist" containerID="443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.128705 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9"} err="failed to get container status \"443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9\": rpc error: code = NotFound desc = could not find container \"443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9\": container with ID starting with 443709295bdaac31497a6cc77ad2bcc3071794d791e0635c510f6ba7c30b30a9 not found: ID does not exist" Feb 03 10:18:04 crc kubenswrapper[5010]: I0203 10:18:04.509288 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" path="/var/lib/kubelet/pods/a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96/volumes" Feb 03 10:18:05 crc kubenswrapper[5010]: I0203 10:18:05.275339 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:18:05 crc kubenswrapper[5010]: I0203 10:18:05.275391 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:18:05 crc kubenswrapper[5010]: I0203 10:18:05.280701 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:18:06 crc kubenswrapper[5010]: I0203 10:18:06.071018 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-85556757c-xgtrl" Feb 03 10:18:06 crc kubenswrapper[5010]: I0203 10:18:06.126772 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"] Feb 03 10:18:09 crc kubenswrapper[5010]: I0203 10:18:09.817825 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-55jg2" Feb 03 10:18:15 crc kubenswrapper[5010]: I0203 10:18:15.245528 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-2xtg6" Feb 03 10:18:16 crc kubenswrapper[5010]: I0203 10:18:16.389891 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:18:16 crc kubenswrapper[5010]: I0203 10:18:16.390280 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:18:16 crc kubenswrapper[5010]: I0203 10:18:16.390353 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:18:16 crc kubenswrapper[5010]: I0203 10:18:16.391168 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:18:16 crc kubenswrapper[5010]: I0203 10:18:16.391247 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0" gracePeriod=600 Feb 03 10:18:17 crc kubenswrapper[5010]: I0203 10:18:17.124767 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0" exitCode=0 Feb 03 10:18:17 crc kubenswrapper[5010]: I0203 10:18:17.124832 5010 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0"} Feb 03 10:18:17 crc kubenswrapper[5010]: I0203 10:18:17.125064 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b"} Feb 03 10:18:17 crc kubenswrapper[5010]: I0203 10:18:17.125083 5010 scope.go:117] "RemoveContainer" containerID="8680190c062bea3a65ab9dd9a4d956ebc68c414b2e8a2f0c41a9c5b1c0cfad9d" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.709479 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz"] Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710184 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="registry-server" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710199 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="registry-server" Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710211 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="extract-content" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710272 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="extract-content" Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710286 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="extract-content" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710295 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="extract-content" Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710317 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="extract-utilities" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710325 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="extract-utilities" Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710334 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="extract-utilities" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710341 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="extract-utilities" Feb 03 10:18:27 crc kubenswrapper[5010]: E0203 10:18:27.710354 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="registry-server" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710361 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="registry-server" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710486 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a595e8ea-8e1d-44c1-9ee0-0e40fa3a0f96" containerName="registry-server" Feb 03 
10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.710501 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae42090-f4be-43c8-b0b1-90fe576195a3" containerName="registry-server" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.711375 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.713598 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.718801 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz"] Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.815165 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwctl\" (UniqueName: \"kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.815291 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.815314 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.916973 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwctl\" (UniqueName: \"kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.917046 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.917072 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: 
\"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.917745 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.918418 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:27 crc kubenswrapper[5010]: I0203 10:18:27.940857 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwctl\" (UniqueName: \"kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:28 crc kubenswrapper[5010]: I0203 10:18:28.031036 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:28 crc kubenswrapper[5010]: I0203 10:18:28.421278 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz"] Feb 03 10:18:29 crc kubenswrapper[5010]: I0203 10:18:29.207088 5010 generic.go:334] "Generic (PLEG): container finished" podID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerID="a3cd82fc92cf61c5f18a09e764f5dd61187286d6b948cfb9d63c617df319c44e" exitCode=0 Feb 03 10:18:29 crc kubenswrapper[5010]: I0203 10:18:29.207149 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" event={"ID":"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119","Type":"ContainerDied","Data":"a3cd82fc92cf61c5f18a09e764f5dd61187286d6b948cfb9d63c617df319c44e"} Feb 03 10:18:29 crc kubenswrapper[5010]: I0203 10:18:29.207181 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" event={"ID":"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119","Type":"ContainerStarted","Data":"14bed4434d2304991aed20b7bafe268c89811d3d8bf20fc4eded5ec1946a7807"} Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.166762 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-wtcpj" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" containerID="cri-o://f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa" gracePeriod=15 Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.226600 5010 generic.go:334] "Generic (PLEG): container finished" podID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerID="762723c8c7f4f28f6095a48162888e71c936fac571db0915653fe6246dcf24e0" exitCode=0 Feb 03 10:18:31 
crc kubenswrapper[5010]: I0203 10:18:31.226651 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" event={"ID":"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119","Type":"ContainerDied","Data":"762723c8c7f4f28f6095a48162888e71c936fac571db0915653fe6246dcf24e0"} Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.602183 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wtcpj_61f7221f-b9e1-45bc-8a9e-2f512c9e457d/console/0.log" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.602279 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.772042 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773019 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config" (OuterVolumeSpecName: "console-config") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773471 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773549 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwvg\" (UniqueName: \"kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773599 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773642 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773725 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.773765 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca\") pod \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\" (UID: \"61f7221f-b9e1-45bc-8a9e-2f512c9e457d\") " Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.774418 5010 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.774441 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.774631 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.774854 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca" (OuterVolumeSpecName: "service-ca") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.779031 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg" (OuterVolumeSpecName: "kube-api-access-kfwvg") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "kube-api-access-kfwvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.779909 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.784255 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "61f7221f-b9e1-45bc-8a9e-2f512c9e457d" (UID: "61f7221f-b9e1-45bc-8a9e-2f512c9e457d"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875368 5010 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875400 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwvg\" (UniqueName: \"kubernetes.io/projected/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-kube-api-access-kfwvg\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875410 5010 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875420 5010 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875429 5010 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:31 crc kubenswrapper[5010]: I0203 10:18:31.875439 5010 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61f7221f-b9e1-45bc-8a9e-2f512c9e457d-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235093 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wtcpj_61f7221f-b9e1-45bc-8a9e-2f512c9e457d/console/0.log" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235187 5010 generic.go:334] "Generic (PLEG): container finished" podID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerID="f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa" exitCode=2 Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235299 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wtcpj" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235326 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wtcpj" event={"ID":"61f7221f-b9e1-45bc-8a9e-2f512c9e457d","Type":"ContainerDied","Data":"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa"} Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235408 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wtcpj" event={"ID":"61f7221f-b9e1-45bc-8a9e-2f512c9e457d","Type":"ContainerDied","Data":"e28ff007b543d7700a90a71c76b34e3da1bf25749689935b2de9d5cc48606a37"} Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.235440 5010 scope.go:117] "RemoveContainer" containerID="f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.239887 5010 generic.go:334] "Generic (PLEG): container finished" podID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerID="7da7195b6792681ec21c8254b8f2e079622d47ffe69d268a6a9e6c70dbadbff6" exitCode=0 Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.239942 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" event={"ID":"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119","Type":"ContainerDied","Data":"7da7195b6792681ec21c8254b8f2e079622d47ffe69d268a6a9e6c70dbadbff6"} Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.256045 5010 scope.go:117] "RemoveContainer" containerID="f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa" Feb 03 10:18:32 crc kubenswrapper[5010]: E0203 10:18:32.256503 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa\": container with ID starting with f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa not found: ID does not exist" containerID="f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.256572 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa"} err="failed to get container status \"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa\": rpc error: code = NotFound desc = could not find container \"f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa\": container with ID starting with f89a159604342113cfd798b38a41427642e3dbe1086be857d2aac704265d43aa not found: ID does not exist" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.274965 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"] Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.280105 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-wtcpj"] Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.523483 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" path="/var/lib/kubelet/pods/61f7221f-b9e1-45bc-8a9e-2f512c9e457d/volumes" Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.580067 5010 patch_prober.go:28] interesting pod/console-f9d7485db-wtcpj container/console namespace/openshift-console: Readiness probe status=failure output="Get 
\"https://10.217.0.7:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 10:18:32 crc kubenswrapper[5010]: I0203 10:18:32.580184 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-wtcpj" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.490377 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.595949 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle\") pod \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.596679 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util\") pod \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.596740 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwctl\" (UniqueName: \"kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl\") pod \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\" (UID: \"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119\") " Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.598743 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle" (OuterVolumeSpecName: "bundle") pod "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" (UID: "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.600587 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl" (OuterVolumeSpecName: "kube-api-access-hwctl") pod "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" (UID: "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119"). InnerVolumeSpecName "kube-api-access-hwctl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.612142 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util" (OuterVolumeSpecName: "util") pod "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" (UID: "bad8c1c1-8f3a-45e1-a3c4-fa197d93d119"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.697845 5010 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-util\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.697876 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwctl\" (UniqueName: \"kubernetes.io/projected/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-kube-api-access-hwctl\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:33 crc kubenswrapper[5010]: I0203 10:18:33.697890 5010 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bad8c1c1-8f3a-45e1-a3c4-fa197d93d119-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:18:34 crc kubenswrapper[5010]: I0203 10:18:34.254930 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" event={"ID":"bad8c1c1-8f3a-45e1-a3c4-fa197d93d119","Type":"ContainerDied","Data":"14bed4434d2304991aed20b7bafe268c89811d3d8bf20fc4eded5ec1946a7807"} Feb 03 10:18:34 crc kubenswrapper[5010]: I0203 10:18:34.254966 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14bed4434d2304991aed20b7bafe268c89811d3d8bf20fc4eded5ec1946a7807" Feb 03 10:18:34 crc kubenswrapper[5010]: I0203 10:18:34.255030 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.468820 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc"] Feb 03 10:18:43 crc kubenswrapper[5010]: E0203 10:18:43.469670 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="extract" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469690 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="extract" Feb 03 10:18:43 crc kubenswrapper[5010]: E0203 10:18:43.469724 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469732 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" Feb 03 10:18:43 crc kubenswrapper[5010]: E0203 10:18:43.469745 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="pull" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469752 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="pull" Feb 03 10:18:43 crc kubenswrapper[5010]: E0203 10:18:43.469765 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="util" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469771 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="util" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469894 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f7221f-b9e1-45bc-8a9e-2f512c9e457d" containerName="console" Feb 
03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.469918 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad8c1c1-8f3a-45e1-a3c4-fa197d93d119" containerName="extract" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.470399 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.475421 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.477470 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.477692 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9sxcg" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.477778 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.489324 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.626912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-apiservice-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.627007 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqdx\" (UniqueName: \"kubernetes.io/projected/5ec28393-ea76-4413-a903-612126368291-kube-api-access-bgqdx\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.627055 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-webhook-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.728783 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgqdx\" (UniqueName: \"kubernetes.io/projected/5ec28393-ea76-4413-a903-612126368291-kube-api-access-bgqdx\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.729487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-webhook-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " 
pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.729574 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-apiservice-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.734912 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-apiservice-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.745128 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ec28393-ea76-4413-a903-612126368291-webhook-cert\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.765060 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc"] Feb 03 10:18:43 crc kubenswrapper[5010]: I0203 10:18:43.792171 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgqdx\" (UniqueName: \"kubernetes.io/projected/5ec28393-ea76-4413-a903-612126368291-kube-api-access-bgqdx\") pod \"metallb-operator-controller-manager-76d7f7cd57-dncnc\" (UID: \"5ec28393-ea76-4413-a903-612126368291\") " pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.071173 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l"] Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.072469 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.075387 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.075941 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.079470 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jxsgn" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.090573 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.092323 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l"] Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.274641 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-apiservice-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.274990 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-webhook-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.275051 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd8v9\" (UniqueName: \"kubernetes.io/projected/d90f33c9-1c81-4b74-a905-71aed9ecf222-kube-api-access-bd8v9\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.375985 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-apiservice-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.376049 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-webhook-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.376097 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd8v9\" (UniqueName: \"kubernetes.io/projected/d90f33c9-1c81-4b74-a905-71aed9ecf222-kube-api-access-bd8v9\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.385092 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-webhook-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.397627 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/d90f33c9-1c81-4b74-a905-71aed9ecf222-apiservice-cert\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.403971 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd8v9\" (UniqueName: \"kubernetes.io/projected/d90f33c9-1c81-4b74-a905-71aed9ecf222-kube-api-access-bd8v9\") pod \"metallb-operator-webhook-server-5b857c8d44-88x9l\" (UID: \"d90f33c9-1c81-4b74-a905-71aed9ecf222\") " pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.655457 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc"] Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.688199 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:44 crc kubenswrapper[5010]: I0203 10:18:44.945793 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l"] Feb 03 10:18:44 crc kubenswrapper[5010]: W0203 10:18:44.951333 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd90f33c9_1c81_4b74_a905_71aed9ecf222.slice/crio-2ce7775edf5a531a3e3b4029ab154de0bbfd3152c770357d92c60d9f1883030d WatchSource:0}: Error finding container 2ce7775edf5a531a3e3b4029ab154de0bbfd3152c770357d92c60d9f1883030d: Status 404 returned error can't find the container with id 2ce7775edf5a531a3e3b4029ab154de0bbfd3152c770357d92c60d9f1883030d Feb 03 10:18:45 crc kubenswrapper[5010]: I0203 10:18:45.333639 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" event={"ID":"d90f33c9-1c81-4b74-a905-71aed9ecf222","Type":"ContainerStarted","Data":"2ce7775edf5a531a3e3b4029ab154de0bbfd3152c770357d92c60d9f1883030d"} Feb 03 10:18:45 crc kubenswrapper[5010]: I0203 10:18:45.334858 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" event={"ID":"5ec28393-ea76-4413-a903-612126368291","Type":"ContainerStarted","Data":"d19d2b325111314fc861c760f2b9cb42288c25df075dbc6b00ae442830b75f6f"} Feb 03 10:18:48 crc kubenswrapper[5010]: I0203 10:18:48.357887 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" event={"ID":"5ec28393-ea76-4413-a903-612126368291","Type":"ContainerStarted","Data":"137317201a6cf8d3a21d714dc3ffe84540e77add914f23bebf6c6570d6b3191a"} Feb 03 10:18:48 crc kubenswrapper[5010]: I0203 10:18:48.358472 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:18:48 crc kubenswrapper[5010]: I0203 10:18:48.384600 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" podStartSLOduration=2.461425962 podStartE2EDuration="5.384574673s" podCreationTimestamp="2026-02-03 10:18:43 +0000 UTC" firstStartedPulling="2026-02-03 10:18:44.663275716 +0000 UTC m=+994.819251845" lastFinishedPulling="2026-02-03 10:18:47.586424427 +0000 UTC m=+997.742400556" 
observedRunningTime="2026-02-03 10:18:48.380447987 +0000 UTC m=+998.536424126" watchObservedRunningTime="2026-02-03 10:18:48.384574673 +0000 UTC m=+998.540550802" Feb 03 10:18:50 crc kubenswrapper[5010]: I0203 10:18:50.375592 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" event={"ID":"d90f33c9-1c81-4b74-a905-71aed9ecf222","Type":"ContainerStarted","Data":"c05cac75c128c1602ab7126d8350064fe25bdf02927bbdfc0099644847764635"} Feb 03 10:18:50 crc kubenswrapper[5010]: I0203 10:18:50.375954 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:18:50 crc kubenswrapper[5010]: I0203 10:18:50.398394 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" podStartSLOduration=1.930403944 podStartE2EDuration="6.398369243s" podCreationTimestamp="2026-02-03 10:18:44 +0000 UTC" firstStartedPulling="2026-02-03 10:18:44.954523525 +0000 UTC m=+995.110499654" lastFinishedPulling="2026-02-03 10:18:49.422488814 +0000 UTC m=+999.578464953" observedRunningTime="2026-02-03 10:18:50.395086019 +0000 UTC m=+1000.551062158" watchObservedRunningTime="2026-02-03 10:18:50.398369243 +0000 UTC m=+1000.554345382" Feb 03 10:19:04 crc kubenswrapper[5010]: I0203 10:19:04.694185 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5b857c8d44-88x9l" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.094284 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-76d7f7cd57-dncnc" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.913957 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-2lwr2"] Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.916948 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.922633 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.922866 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.922970 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-22lgm" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.934785 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw"] Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.936154 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.940621 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 03 10:19:24 crc kubenswrapper[5010]: I0203 10:19:24.965256 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw"] Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.051800 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-mlsql"] Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.052891 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.055412 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.056092 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.057405 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.057842 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wg7nb" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.069751 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-lpqgh"] Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.070572 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.073488 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077359 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077390 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78k6c\" (UniqueName: \"kubernetes.io/projected/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-kube-api-access-78k6c\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077418 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-reloader\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077439 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-sockets\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " 
pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077473 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics-certs\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077491 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxx7s\" (UniqueName: \"kubernetes.io/projected/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-kube-api-access-qxx7s\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077509 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077526 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-conf\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.077551 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-startup\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.098315 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lpqgh"] Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.178927 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72e88a76-8c59-4d07-813e-d7d505d14c3b-metallb-excludel2\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.178976 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.178998 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78k6c\" (UniqueName: \"kubernetes.io/projected/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-kube-api-access-78k6c\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179019 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllv6\" (UniqueName: 
\"kubernetes.io/projected/72e88a76-8c59-4d07-813e-d7d505d14c3b-kube-api-access-wllv6\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179059 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179081 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-reloader\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179101 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-sockets\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179124 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjwwz\" (UniqueName: \"kubernetes.io/projected/19f856e9-2325-41eb-8ed3-4daff562e84a-kube-api-access-wjwwz\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179148 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-cert\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179170 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics-certs\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179191 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxx7s\" (UniqueName: \"kubernetes.io/projected/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-kube-api-access-qxx7s\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179207 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-metrics-certs\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179249 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " 
pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179279 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-conf\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179306 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.179334 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-startup\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.180256 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-startup\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.181450 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.181568 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-conf\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.181625 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-reloader\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.181892 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-frr-sockets\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.188973 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-metrics-certs\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.189598 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.201172 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78k6c\" (UniqueName: \"kubernetes.io/projected/f6ea4a71-2a4d-48cd-9dda-ba453a1c8766-kube-api-access-78k6c\") pod \"frr-k8s-webhook-server-7df86c4f6c-dbqxw\" (UID: \"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.221512 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxx7s\" (UniqueName: \"kubernetes.io/projected/4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5-kube-api-access-qxx7s\") pod \"frr-k8s-2lwr2\" (UID: \"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5\") " pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.257375 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.270554 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.279963 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wllv6\" (UniqueName: \"kubernetes.io/projected/72e88a76-8c59-4d07-813e-d7d505d14c3b-kube-api-access-wllv6\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280011 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280068 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjwwz\" (UniqueName: \"kubernetes.io/projected/19f856e9-2325-41eb-8ed3-4daff562e84a-kube-api-access-wjwwz\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280107 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-cert\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280149 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-metrics-certs\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.280174 5010 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280191 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.280264 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs podName:19f856e9-2325-41eb-8ed3-4daff562e84a nodeName:}" failed. No retries permitted until 2026-02-03 10:19:25.780240561 +0000 UTC m=+1035.936216690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs") pod "controller-6968d8fdc4-lpqgh" (UID: "19f856e9-2325-41eb-8ed3-4daff562e84a") : secret "controller-certs-secret" not found Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.280286 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72e88a76-8c59-4d07-813e-d7d505d14c3b-metallb-excludel2\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.280296 5010 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.280330 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist podName:72e88a76-8c59-4d07-813e-d7d505d14c3b nodeName:}" failed. No retries permitted until 2026-02-03 10:19:25.780317483 +0000 UTC m=+1035.936293622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist") pod "speaker-mlsql" (UID: "72e88a76-8c59-4d07-813e-d7d505d14c3b") : secret "metallb-memberlist" not found Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.281048 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/72e88a76-8c59-4d07-813e-d7d505d14c3b-metallb-excludel2\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.282794 5010 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.284618 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-metrics-certs\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.294301 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-cert\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.295493 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wllv6\" (UniqueName: \"kubernetes.io/projected/72e88a76-8c59-4d07-813e-d7d505d14c3b-kube-api-access-wllv6\") pod \"speaker-mlsql\" (UID: 
\"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.299935 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjwwz\" (UniqueName: \"kubernetes.io/projected/19f856e9-2325-41eb-8ed3-4daff562e84a-kube-api-access-wjwwz\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.788172 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.788248 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.788348 5010 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 10:19:25 crc kubenswrapper[5010]: E0203 10:19:25.788421 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist podName:72e88a76-8c59-4d07-813e-d7d505d14c3b nodeName:}" failed. No retries permitted until 2026-02-03 10:19:26.78840272 +0000 UTC m=+1036.944378849 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist") pod "speaker-mlsql" (UID: "72e88a76-8c59-4d07-813e-d7d505d14c3b") : secret "metallb-memberlist" not found Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.793999 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/19f856e9-2325-41eb-8ed3-4daff562e84a-metrics-certs\") pod \"controller-6968d8fdc4-lpqgh\" (UID: \"19f856e9-2325-41eb-8ed3-4daff562e84a\") " pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.794441 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw"] Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.912491 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" event={"ID":"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766","Type":"ContainerStarted","Data":"a253028265a27ce0e11b3e3849e1a3ac3e9fde42fef061c1469257b50049e5a7"} Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.913391 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"01453a2818ff94a8915f3e81e8de25511c89e4a9454eb648bd0e2f7af01cbae7"} Feb 03 10:19:25 crc kubenswrapper[5010]: I0203 10:19:25.986095 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.201537 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lpqgh"] Feb 03 10:19:26 crc kubenswrapper[5010]: W0203 10:19:26.210917 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19f856e9_2325_41eb_8ed3_4daff562e84a.slice/crio-9df4bf419d874cabf3eae1eaa610220c77222c7130a1f4414a4518089d6f716d WatchSource:0}: Error finding container 9df4bf419d874cabf3eae1eaa610220c77222c7130a1f4414a4518089d6f716d: Status 404 returned error can't find the container with id 9df4bf419d874cabf3eae1eaa610220c77222c7130a1f4414a4518089d6f716d Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.802880 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.807324 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/72e88a76-8c59-4d07-813e-d7d505d14c3b-memberlist\") pod \"speaker-mlsql\" (UID: \"72e88a76-8c59-4d07-813e-d7d505d14c3b\") " pod="metallb-system/speaker-mlsql" Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.872682 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-mlsql" Feb 03 10:19:26 crc kubenswrapper[5010]: W0203 10:19:26.895082 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72e88a76_8c59_4d07_813e_d7d505d14c3b.slice/crio-3485d30491a5e697838728824aeec50d9a29751e88e9143f609c70084c0bbf21 WatchSource:0}: Error finding container 3485d30491a5e697838728824aeec50d9a29751e88e9143f609c70084c0bbf21: Status 404 returned error can't find the container with id 3485d30491a5e697838728824aeec50d9a29751e88e9143f609c70084c0bbf21 Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.920094 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lpqgh" event={"ID":"19f856e9-2325-41eb-8ed3-4daff562e84a","Type":"ContainerStarted","Data":"f38b71a25ab14fa3e82a7778ddbb4430e03d64c773dc23f472818e0dff2e79a9"} Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.920160 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lpqgh" event={"ID":"19f856e9-2325-41eb-8ed3-4daff562e84a","Type":"ContainerStarted","Data":"f390d4927b128ff0cf6da15910b38388dbd985cf3049fd4c3f7a4e7957c17c12"} Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.920186 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-lpqgh" Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.920204 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lpqgh" event={"ID":"19f856e9-2325-41eb-8ed3-4daff562e84a","Type":"ContainerStarted","Data":"9df4bf419d874cabf3eae1eaa610220c77222c7130a1f4414a4518089d6f716d"} Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.920870 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mlsql" 
event={"ID":"72e88a76-8c59-4d07-813e-d7d505d14c3b","Type":"ContainerStarted","Data":"3485d30491a5e697838728824aeec50d9a29751e88e9143f609c70084c0bbf21"} Feb 03 10:19:26 crc kubenswrapper[5010]: I0203 10:19:26.939824 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-lpqgh" podStartSLOduration=1.939804166 podStartE2EDuration="1.939804166s" podCreationTimestamp="2026-02-03 10:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:19:26.935759262 +0000 UTC m=+1037.091735421" watchObservedRunningTime="2026-02-03 10:19:26.939804166 +0000 UTC m=+1037.095780305" Feb 03 10:19:27 crc kubenswrapper[5010]: I0203 10:19:27.978038 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mlsql" event={"ID":"72e88a76-8c59-4d07-813e-d7d505d14c3b","Type":"ContainerStarted","Data":"17ec44bd6f4c15bdda152c97fb08b1b6d4f4ffdce03bf0542268ec3e643b0d0c"} Feb 03 10:19:27 crc kubenswrapper[5010]: I0203 10:19:27.978079 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mlsql" event={"ID":"72e88a76-8c59-4d07-813e-d7d505d14c3b","Type":"ContainerStarted","Data":"ee2dbe1e9eeca94b7f9b024f99d7761c6b2f63ca3871d8a2c84e4ece5c4a0858"} Feb 03 10:19:27 crc kubenswrapper[5010]: I0203 10:19:27.978103 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-mlsql" Feb 03 10:19:28 crc kubenswrapper[5010]: I0203 10:19:28.006987 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-mlsql" podStartSLOduration=3.006967799 podStartE2EDuration="3.006967799s" podCreationTimestamp="2026-02-03 10:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:19:28.002205177 +0000 UTC m=+1038.158181326" watchObservedRunningTime="2026-02-03 10:19:28.006967799 +0000 UTC m=+1038.162943928" Feb 03 10:19:38 crc kubenswrapper[5010]: I0203 10:19:38.111548 5010 generic.go:334] "Generic (PLEG): container finished" podID="4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5" containerID="eee945f5cb01663746714448a20c0735d4264b42915d138bc8ea2fe9b67de247" exitCode=0 Feb 03 10:19:38 crc kubenswrapper[5010]: I0203 10:19:38.111986 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerDied","Data":"eee945f5cb01663746714448a20c0735d4264b42915d138bc8ea2fe9b67de247"} Feb 03 10:19:38 crc kubenswrapper[5010]: I0203 10:19:38.113993 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" event={"ID":"f6ea4a71-2a4d-48cd-9dda-ba453a1c8766","Type":"ContainerStarted","Data":"ba05f2744a466a2727a76e31377b4993405a89f3d817fb665106d2d3d0aeb271"} Feb 03 10:19:38 crc kubenswrapper[5010]: I0203 10:19:38.114251 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:38 crc kubenswrapper[5010]: I0203 10:19:38.160719 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" podStartSLOduration=2.864725262 podStartE2EDuration="14.160694551s" podCreationTimestamp="2026-02-03 10:19:24 +0000 UTC" firstStartedPulling="2026-02-03 10:19:25.804537354 +0000 UTC m=+1035.960513483" 
Feb 03 10:19:39 crc kubenswrapper[5010]: I0203 10:19:39.121640 5010 generic.go:334] "Generic (PLEG): container finished" podID="4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5" containerID="18368ccf63f783db882a121c7b947b3387b300c8f7a80a947c097d8261fdb770" exitCode=0
Feb 03 10:19:39 crc kubenswrapper[5010]: I0203 10:19:39.121734 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerDied","Data":"18368ccf63f783db882a121c7b947b3387b300c8f7a80a947c097d8261fdb770"}
Feb 03 10:19:40 crc kubenswrapper[5010]: I0203 10:19:40.128460 5010 generic.go:334] "Generic (PLEG): container finished" podID="4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5" containerID="65eb5b187fb2b621b6369b286c1282184886349f4993b9fb3636ccf8920ff8d6" exitCode=0
Feb 03 10:19:40 crc kubenswrapper[5010]: I0203 10:19:40.128494 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerDied","Data":"65eb5b187fb2b621b6369b286c1282184886349f4993b9fb3636ccf8920ff8d6"}
Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.140769 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"3b9afa48db592eccb97b76872b31a36eb379d1c2ce8520af4f37e34f4b660c00"}
Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141106 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-2lwr2"
Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141122 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"51c00d5c8ba6e4f4fac73ffaba6f4fcbd46576ac40fc873aa85d9674443a706b"}
Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141140 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"71344978daa2db95d6a18fce035d560708c4cd853cc315fb5a314ddb6a5d48b2"}
Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141152 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"3e42db12729f8deeed09fb29d587b16b00967c8c046e2fe546ae400778f92295"}
Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141163 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"aa4a1c721811ce88c6727b2b6f1831342546957b48a133194683d6e8edde97a2"}
Feb 03 10:19:41 crc kubenswrapper[5010]: I0203 10:19:41.141175 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2lwr2" event={"ID":"4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5","Type":"ContainerStarted","Data":"ed207a36434fbc2b0fdbb09b247f112be66dda6b02a88d08579a4b0cdd47c950"}
Feb 03 10:19:45 crc kubenswrapper[5010]: I0203 10:19:45.257587 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-2lwr2"
Feb 03 10:19:45 crc kubenswrapper[5010]: I0203 10:19:45.295740 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-2lwr2"
Feb 03 10:19:45 crc kubenswrapper[5010]: I0203 10:19:45.319232 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-2lwr2" podStartSLOduration=9.712184375 podStartE2EDuration="21.319199901s" podCreationTimestamp="2026-02-03 10:19:24 +0000 UTC" firstStartedPulling="2026-02-03 10:19:25.477950798 +0000 UTC m=+1035.633926927" lastFinishedPulling="2026-02-03 10:19:37.084966324 +0000 UTC m=+1047.240942453" observedRunningTime="2026-02-03 10:19:41.167392072 +0000 UTC m=+1051.323368201" watchObservedRunningTime="2026-02-03 10:19:45.319199901 +0000 UTC m=+1055.475176030"
Feb 03 10:19:45 crc kubenswrapper[5010]: I0203 10:19:45.989980 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-lpqgh"
Feb 03 10:19:46 crc kubenswrapper[5010]: I0203 10:19:46.876147 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-mlsql"
Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.962785 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"]
Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.963608 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-58tlq"
Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.966115 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5qw2t"
Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.966188 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.969407 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 03 10:19:49 crc kubenswrapper[5010]: I0203 10:19:49.982512 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"]
Feb 03 10:19:50 crc kubenswrapper[5010]: I0203 10:19:50.154424 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jzmc\" (UniqueName: \"kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc\") pod \"openstack-operator-index-58tlq\" (UID: \"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1\") " pod="openstack-operators/openstack-operator-index-58tlq"
Feb 03 10:19:50 crc kubenswrapper[5010]: I0203 10:19:50.265769 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jzmc\" (UniqueName: \"kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc\") pod \"openstack-operator-index-58tlq\" (UID: \"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1\") " pod="openstack-operators/openstack-operator-index-58tlq"
Feb 03 10:19:50 crc kubenswrapper[5010]: I0203 10:19:50.284081 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jzmc\" (UniqueName: \"kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc\") pod \"openstack-operator-index-58tlq\" (UID: \"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1\") " pod="openstack-operators/openstack-operator-index-58tlq"
Feb 03 10:19:50 crc kubenswrapper[5010]: I0203 10:19:50.581208 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-58tlq"
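The three ContainerDied events with exitCode=0 between 10:19:38 and 10:19:40 (eee945f5..., 18368ccf..., 65eb5b18...), followed at 10:19:41 by a burst of ContainerStarted events for frr-k8s-2lwr2, are consistent with the pod's init containers completing one by one before its main containers launch. The journal carries only container IDs, so mapping IDs to container names has to go through the API; one possible sketch (pod name from the log; verify the jsonpath against your kubectl version):

    $ kubectl -n metallb-system get pod frr-k8s-2lwr2 -o jsonpath=\
    '{range .status.initContainerStatuses[*]}{.name}{"\t"}{.containerID}{"\texit="}{.state.terminated.exitCode}{"\n"}{end}'
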
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-58tlq" Feb 03 10:19:50 crc kubenswrapper[5010]: I0203 10:19:50.989648 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"] Feb 03 10:19:51 crc kubenswrapper[5010]: I0203 10:19:51.199423 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-58tlq" event={"ID":"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1","Type":"ContainerStarted","Data":"5e677639a6c97370081222296cbb2e0a8d8af6746b719c225659bc34635fbb81"} Feb 03 10:19:53 crc kubenswrapper[5010]: I0203 10:19:53.340289 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"] Feb 03 10:19:53 crc kubenswrapper[5010]: I0203 10:19:53.958377 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fv5km"] Feb 03 10:19:53 crc kubenswrapper[5010]: I0203 10:19:53.959568 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:19:53 crc kubenswrapper[5010]: I0203 10:19:53.965882 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fv5km"] Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.015352 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57r2\" (UniqueName: \"kubernetes.io/projected/1e93c0a0-5a7b-40d7-aaee-e31455baf139-kube-api-access-v57r2\") pod \"openstack-operator-index-fv5km\" (UID: \"1e93c0a0-5a7b-40d7-aaee-e31455baf139\") " pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.116978 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v57r2\" (UniqueName: \"kubernetes.io/projected/1e93c0a0-5a7b-40d7-aaee-e31455baf139-kube-api-access-v57r2\") pod \"openstack-operator-index-fv5km\" (UID: \"1e93c0a0-5a7b-40d7-aaee-e31455baf139\") " pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.142373 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v57r2\" (UniqueName: \"kubernetes.io/projected/1e93c0a0-5a7b-40d7-aaee-e31455baf139-kube-api-access-v57r2\") pod \"openstack-operator-index-fv5km\" (UID: \"1e93c0a0-5a7b-40d7-aaee-e31455baf139\") " pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.219322 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-58tlq" event={"ID":"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1","Type":"ContainerStarted","Data":"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c"} Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.219432 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-58tlq" podUID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" containerName="registry-server" containerID="cri-o://bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c" gracePeriod=2 Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.238615 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-58tlq" podStartSLOduration=2.19065542 
podStartE2EDuration="5.23859685s" podCreationTimestamp="2026-02-03 10:19:49 +0000 UTC" firstStartedPulling="2026-02-03 10:19:50.998471188 +0000 UTC m=+1061.154447317" lastFinishedPulling="2026-02-03 10:19:54.046412618 +0000 UTC m=+1064.202388747" observedRunningTime="2026-02-03 10:19:54.236556847 +0000 UTC m=+1064.392532976" watchObservedRunningTime="2026-02-03 10:19:54.23859685 +0000 UTC m=+1064.394572979" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.282599 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.513174 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fv5km"] Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.669474 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-58tlq" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.826241 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jzmc\" (UniqueName: \"kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc\") pod \"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1\" (UID: \"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1\") " Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.832565 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc" (OuterVolumeSpecName: "kube-api-access-8jzmc") pod "27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" (UID: "27e02f08-a8b7-490f-a26c-2a5aa6af0ad1"). InnerVolumeSpecName "kube-api-access-8jzmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:19:54 crc kubenswrapper[5010]: I0203 10:19:54.927753 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jzmc\" (UniqueName: \"kubernetes.io/projected/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1-kube-api-access-8jzmc\") on node \"crc\" DevicePath \"\"" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.226936 5010 generic.go:334] "Generic (PLEG): container finished" podID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" containerID="bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c" exitCode=0 Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.226988 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-58tlq" event={"ID":"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1","Type":"ContainerDied","Data":"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c"} Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.227011 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-58tlq" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.227028 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-58tlq" event={"ID":"27e02f08-a8b7-490f-a26c-2a5aa6af0ad1","Type":"ContainerDied","Data":"5e677639a6c97370081222296cbb2e0a8d8af6746b719c225659bc34635fbb81"} Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.227045 5010 scope.go:117] "RemoveContainer" containerID="bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.229118 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fv5km" event={"ID":"1e93c0a0-5a7b-40d7-aaee-e31455baf139","Type":"ContainerStarted","Data":"062ce5e416a0048c6fe820619953bcbc43eac0ccba4550cb07947408bb005877"} Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.229151 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fv5km" event={"ID":"1e93c0a0-5a7b-40d7-aaee-e31455baf139","Type":"ContainerStarted","Data":"631725c7047fa1106af2d95e1b032c3ec5c9c17ad929d8a7b1babf104903e323"} Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.245371 5010 scope.go:117] "RemoveContainer" containerID="bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c" Feb 03 10:19:55 crc kubenswrapper[5010]: E0203 10:19:55.246141 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c\": container with ID starting with bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c not found: ID does not exist" containerID="bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.246234 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c"} err="failed to get container status \"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c\": rpc error: code = NotFound desc = could not find container \"bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c\": container with ID starting with bfde3b37fea1e4aeafc618d315c12cc69aa465f4b311c30ac3b0ddec98c58b7c not found: ID does not exist" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.250778 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fv5km" podStartSLOduration=2.035012019 podStartE2EDuration="2.250754095s" podCreationTimestamp="2026-02-03 10:19:53 +0000 UTC" firstStartedPulling="2026-02-03 10:19:54.525823981 +0000 UTC m=+1064.681800110" lastFinishedPulling="2026-02-03 10:19:54.741566057 +0000 UTC m=+1064.897542186" observedRunningTime="2026-02-03 10:19:55.245148241 +0000 UTC m=+1065.401124390" watchObservedRunningTime="2026-02-03 10:19:55.250754095 +0000 UTC m=+1065.406730254" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.263689 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-2lwr2" Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.269162 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"] Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.273064 5010 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-58tlq"] Feb 03 10:19:55 crc kubenswrapper[5010]: I0203 10:19:55.274459 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dbqxw" Feb 03 10:19:56 crc kubenswrapper[5010]: I0203 10:19:56.509152 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" path="/var/lib/kubelet/pods/27e02f08-a8b7-490f-a26c-2a5aa6af0ad1/volumes" Feb 03 10:20:04 crc kubenswrapper[5010]: I0203 10:20:04.283468 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:20:04 crc kubenswrapper[5010]: I0203 10:20:04.283966 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:20:04 crc kubenswrapper[5010]: I0203 10:20:04.316651 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:20:04 crc kubenswrapper[5010]: I0203 10:20:04.346939 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fv5km" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.187823 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc"] Feb 03 10:20:06 crc kubenswrapper[5010]: E0203 10:20:06.188384 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" containerName="registry-server" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.188401 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" containerName="registry-server" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.188527 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e02f08-a8b7-490f-a26c-2a5aa6af0ad1" containerName="registry-server" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.189350 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.191188 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9977h" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.200157 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc"] Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.299457 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.299514 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.299571 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fcr8\" (UniqueName: \"kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.400508 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.400563 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.400606 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fcr8\" (UniqueName: \"kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.401150 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.402778 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.419011 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fcr8\" (UniqueName: \"kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8\") pod \"2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.532565 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:06 crc kubenswrapper[5010]: I0203 10:20:06.935560 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc"] Feb 03 10:20:07 crc kubenswrapper[5010]: I0203 10:20:07.312324 5010 generic.go:334] "Generic (PLEG): container finished" podID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerID="72abbe53ef303c966dac97295039fd50d30e9f313ab1eb51a686e38c86ad29bf" exitCode=0 Feb 03 10:20:07 crc kubenswrapper[5010]: I0203 10:20:07.312415 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" event={"ID":"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb","Type":"ContainerDied","Data":"72abbe53ef303c966dac97295039fd50d30e9f313ab1eb51a686e38c86ad29bf"} Feb 03 10:20:07 crc kubenswrapper[5010]: I0203 10:20:07.312674 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" event={"ID":"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb","Type":"ContainerStarted","Data":"7d64426a1c5618ac69d74890d5ab09299f87b0d7ca2ece50947215f9f2159ac5"} Feb 03 10:20:08 crc kubenswrapper[5010]: I0203 10:20:08.320399 5010 generic.go:334] "Generic (PLEG): container finished" podID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerID="15ba1fb969009e5814cbdceceaf66ba33621a230a92dd50a4bf7e769958bf10f" exitCode=0 Feb 03 10:20:08 crc kubenswrapper[5010]: I0203 10:20:08.320501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" event={"ID":"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb","Type":"ContainerDied","Data":"15ba1fb969009e5814cbdceceaf66ba33621a230a92dd50a4bf7e769958bf10f"} Feb 03 10:20:09 crc kubenswrapper[5010]: I0203 10:20:09.330785 5010 generic.go:334] "Generic (PLEG): container finished" podID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerID="94438307668eb53c5f5445f671fd9a1bcebd80dfe6d4f4a5a3e39c52ce3f74fd" exitCode=0 Feb 03 10:20:09 crc kubenswrapper[5010]: I0203 10:20:09.330842 5010 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" event={"ID":"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb","Type":"ContainerDied","Data":"94438307668eb53c5f5445f671fd9a1bcebd80dfe6d4f4a5a3e39c52ce3f74fd"} Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.591062 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.658863 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle\") pod \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.659194 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util\") pod \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.659331 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fcr8\" (UniqueName: \"kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8\") pod \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\" (UID: \"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb\") " Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.659707 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle" (OuterVolumeSpecName: "bundle") pod "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" (UID: "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.664265 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8" (OuterVolumeSpecName: "kube-api-access-2fcr8") pod "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" (UID: "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb"). InnerVolumeSpecName "kube-api-access-2fcr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.675031 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util" (OuterVolumeSpecName: "util") pod "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" (UID: "878224e8-6bbb-4b7f-9aff-b2bf21eef4bb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.761242 5010 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.761287 5010 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-util\") on node \"crc\" DevicePath \"\"" Feb 03 10:20:10 crc kubenswrapper[5010]: I0203 10:20:10.761297 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fcr8\" (UniqueName: \"kubernetes.io/projected/878224e8-6bbb-4b7f-9aff-b2bf21eef4bb-kube-api-access-2fcr8\") on node \"crc\" DevicePath \"\"" Feb 03 10:20:11 crc kubenswrapper[5010]: I0203 10:20:11.346957 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" event={"ID":"878224e8-6bbb-4b7f-9aff-b2bf21eef4bb","Type":"ContainerDied","Data":"7d64426a1c5618ac69d74890d5ab09299f87b0d7ca2ece50947215f9f2159ac5"} Feb 03 10:20:11 crc kubenswrapper[5010]: I0203 10:20:11.346994 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc" Feb 03 10:20:11 crc kubenswrapper[5010]: I0203 10:20:11.347007 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d64426a1c5618ac69d74890d5ab09299f87b0d7ca2ece50947215f9f2159ac5" Feb 03 10:20:16 crc kubenswrapper[5010]: I0203 10:20:16.389824 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:20:16 crc kubenswrapper[5010]: I0203 10:20:16.390395 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.184180 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"] Feb 03 10:20:18 crc kubenswrapper[5010]: E0203 10:20:18.184803 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="pull" Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.184817 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="pull" Feb 03 10:20:18 crc kubenswrapper[5010]: E0203 10:20:18.184840 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="util" Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.184847 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="util" Feb 03 10:20:18 crc kubenswrapper[5010]: E0203 10:20:18.184860 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="extract" Feb 03 10:20:18 
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.185003 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="878224e8-6bbb-4b7f-9aff-b2bf21eef4bb" containerName="extract"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.185485 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.189476 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-2kgrw"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.227095 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"]
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.265592 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfd89\" (UniqueName: \"kubernetes.io/projected/bde44bc9-c06a-4c2b-aad8-6f3247272024-kube-api-access-pfd89\") pod \"openstack-operator-controller-init-578f994c6c-72ld2\" (UID: \"bde44bc9-c06a-4c2b-aad8-6f3247272024\") " pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.366589 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfd89\" (UniqueName: \"kubernetes.io/projected/bde44bc9-c06a-4c2b-aad8-6f3247272024-kube-api-access-pfd89\") pod \"openstack-operator-controller-init-578f994c6c-72ld2\" (UID: \"bde44bc9-c06a-4c2b-aad8-6f3247272024\") " pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.388447 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfd89\" (UniqueName: \"kubernetes.io/projected/bde44bc9-c06a-4c2b-aad8-6f3247272024-kube-api-access-pfd89\") pod \"openstack-operator-controller-init-578f994c6c-72ld2\" (UID: \"bde44bc9-c06a-4c2b-aad8-6f3247272024\") " pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.503540 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.982575 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"]
Feb 03 10:20:18 crc kubenswrapper[5010]: I0203 10:20:18.991175 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 10:20:19 crc kubenswrapper[5010]: I0203 10:20:19.413977 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2" event={"ID":"bde44bc9-c06a-4c2b-aad8-6f3247272024","Type":"ContainerStarted","Data":"0bebbf9909ef02daaa1533195d95da469593d888464f80ed7cf687d6aa5f592f"}
Feb 03 10:20:27 crc kubenswrapper[5010]: I0203 10:20:27.488571 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2" event={"ID":"bde44bc9-c06a-4c2b-aad8-6f3247272024","Type":"ContainerStarted","Data":"981b2e22c7badf0ca3652cd4319b877b8391ab2b738289eb3dbf54c4ef99062b"}
Feb 03 10:20:27 crc kubenswrapper[5010]: I0203 10:20:27.490131 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:27 crc kubenswrapper[5010]: I0203 10:20:27.523323 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2" podStartSLOduration=1.360225965 podStartE2EDuration="9.523301904s" podCreationTimestamp="2026-02-03 10:20:18 +0000 UTC" firstStartedPulling="2026-02-03 10:20:18.990825234 +0000 UTC m=+1089.146801363" lastFinishedPulling="2026-02-03 10:20:27.153901173 +0000 UTC m=+1097.309877302" observedRunningTime="2026-02-03 10:20:27.521129338 +0000 UTC m=+1097.677105467" watchObservedRunningTime="2026-02-03 10:20:27.523301904 +0000 UTC m=+1097.679278033"
Feb 03 10:20:38 crc kubenswrapper[5010]: I0203 10:20:38.510521 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-578f994c6c-72ld2"
Feb 03 10:20:46 crc kubenswrapper[5010]: I0203 10:20:46.397873 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:20:46 crc kubenswrapper[5010]: I0203 10:20:46.398564 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.086702 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72"]
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.088378 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72"
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.091873 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lvq9v" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.092100 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.093037 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.094915 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-x5txp" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.097967 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.101554 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.143034 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.143834 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.150593 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-wlxnv" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.167304 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.168650 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.177824 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-t2hc2" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.182305 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.198452 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.209474 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.210742 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.213681 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8ffcr" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.230664 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.231535 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.235421 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-67qfn" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.244606 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.251006 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b44v\" (UniqueName: \"kubernetes.io/projected/a7d72ea1-7126-4768-9cf8-f590ebd216d7-kube-api-access-2b44v\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-52g72\" (UID: \"a7d72ea1-7126-4768-9cf8-f590ebd216d7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.251067 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn9q2\" (UniqueName: \"kubernetes.io/projected/9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe-kube-api-access-nn9q2\") pod \"glance-operator-controller-manager-8886f4c47-gnxws\" (UID: \"9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.251090 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmjvd\" (UniqueName: \"kubernetes.io/projected/74803e29-48a3-4667-bcdb-a94f381545b5-kube-api-access-dmjvd\") pod \"cinder-operator-controller-manager-8d874c8fc-jvb56\" (UID: \"74803e29-48a3-4667-bcdb-a94f381545b5\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.251118 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zg2\" (UniqueName: \"kubernetes.io/projected/fd413d86-2cda-4079-a895-5cb60928a47f-kube-api-access-l6zg2\") pod \"designate-operator-controller-manager-6d9697b7f4-j87lc\" (UID: \"fd413d86-2cda-4079-a895-5cb60928a47f\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.263289 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.264270 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.273588 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.273681 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qfj78" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.292711 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.293596 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.297244 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-556xw" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.305687 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.320050 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.321053 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.324242 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kk5q5" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.336734 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.352922 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.352987 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dx96\" (UniqueName: \"kubernetes.io/projected/9dc494bd-d6ef-4a22-8312-67750ebb3dbe-kube-api-access-6dx96\") pod \"horizon-operator-controller-manager-5fb775575f-k765q\" (UID: \"9dc494bd-d6ef-4a22-8312-67750ebb3dbe\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.353023 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b44v\" (UniqueName: \"kubernetes.io/projected/a7d72ea1-7126-4768-9cf8-f590ebd216d7-kube-api-access-2b44v\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-52g72\" (UID: \"a7d72ea1-7126-4768-9cf8-f590ebd216d7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:20:57 crc kubenswrapper[5010]: 
I0203 10:20:57.353070 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzmjn\" (UniqueName: \"kubernetes.io/projected/5fafda3f-e0cd-4477-9c10-442af83a835b-kube-api-access-nzmjn\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.353109 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn9q2\" (UniqueName: \"kubernetes.io/projected/9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe-kube-api-access-nn9q2\") pod \"glance-operator-controller-manager-8886f4c47-gnxws\" (UID: \"9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.353149 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmjvd\" (UniqueName: \"kubernetes.io/projected/74803e29-48a3-4667-bcdb-a94f381545b5-kube-api-access-dmjvd\") pod \"cinder-operator-controller-manager-8d874c8fc-jvb56\" (UID: \"74803e29-48a3-4667-bcdb-a94f381545b5\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.353195 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zg2\" (UniqueName: \"kubernetes.io/projected/fd413d86-2cda-4079-a895-5cb60928a47f-kube-api-access-l6zg2\") pod \"designate-operator-controller-manager-6d9697b7f4-j87lc\" (UID: \"fd413d86-2cda-4079-a895-5cb60928a47f\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.353614 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khfqw\" (UniqueName: \"kubernetes.io/projected/d33dc0fd-847b-41cc-a8ac-afde40120ba2-kube-api-access-khfqw\") pod \"heat-operator-controller-manager-69d6db494d-7szqs\" (UID: \"d33dc0fd-847b-41cc-a8ac-afde40120ba2\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.361781 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.389130 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn9q2\" (UniqueName: \"kubernetes.io/projected/9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe-kube-api-access-nn9q2\") pod \"glance-operator-controller-manager-8886f4c47-gnxws\" (UID: \"9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.393382 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.394574 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zg2\" (UniqueName: \"kubernetes.io/projected/fd413d86-2cda-4079-a895-5cb60928a47f-kube-api-access-l6zg2\") pod \"designate-operator-controller-manager-6d9697b7f4-j87lc\" (UID: \"fd413d86-2cda-4079-a895-5cb60928a47f\") " 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.396000 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b44v\" (UniqueName: \"kubernetes.io/projected/a7d72ea1-7126-4768-9cf8-f590ebd216d7-kube-api-access-2b44v\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-52g72\" (UID: \"a7d72ea1-7126-4768-9cf8-f590ebd216d7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.400615 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmjvd\" (UniqueName: \"kubernetes.io/projected/74803e29-48a3-4667-bcdb-a94f381545b5-kube-api-access-dmjvd\") pod \"cinder-operator-controller-manager-8d874c8fc-jvb56\" (UID: \"74803e29-48a3-4667-bcdb-a94f381545b5\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.402504 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.403201 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.418076 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.418684 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-bw698" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.426655 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.426980 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.435350 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.436320 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.444231 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.445072 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mwbcv" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.457560 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khfqw\" (UniqueName: \"kubernetes.io/projected/d33dc0fd-847b-41cc-a8ac-afde40120ba2-kube-api-access-khfqw\") pod \"heat-operator-controller-manager-69d6db494d-7szqs\" (UID: \"d33dc0fd-847b-41cc-a8ac-afde40120ba2\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.459266 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.459466 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dx96\" (UniqueName: \"kubernetes.io/projected/9dc494bd-d6ef-4a22-8312-67750ebb3dbe-kube-api-access-6dx96\") pod \"horizon-operator-controller-manager-5fb775575f-k765q\" (UID: \"9dc494bd-d6ef-4a22-8312-67750ebb3dbe\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.459644 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzmjn\" (UniqueName: \"kubernetes.io/projected/5fafda3f-e0cd-4477-9c10-442af83a835b-kube-api-access-nzmjn\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.459797 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vghr\" (UniqueName: \"kubernetes.io/projected/2f204595-5d98-4c16-b5d1-5004c6cae836-kube-api-access-4vghr\") pod \"ironic-operator-controller-manager-5f4b8bd54d-w7ldz\" (UID: \"2f204595-5d98-4c16-b5d1-5004c6cae836\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.459960 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69sw\" (UniqueName: \"kubernetes.io/projected/1a136ea1-ab68-4f60-8fb2-969363f25337-kube-api-access-k69sw\") pod \"keystone-operator-controller-manager-84f48565d4-gb8tp\" (UID: \"1a136ea1-ab68-4f60-8fb2-969363f25337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:20:57 crc kubenswrapper[5010]: E0203 10:20:57.461082 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 10:20:57 crc kubenswrapper[5010]: E0203 10:20:57.461148 5010 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. No retries permitted until 2026-02-03 10:20:57.961130831 +0000 UTC m=+1128.117106960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.479673 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.503336 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.506740 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks"]
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.510564 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khfqw\" (UniqueName: \"kubernetes.io/projected/d33dc0fd-847b-41cc-a8ac-afde40120ba2-kube-api-access-khfqw\") pod \"heat-operator-controller-manager-69d6db494d-7szqs\" (UID: \"d33dc0fd-847b-41cc-a8ac-afde40120ba2\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.518368 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzmjn\" (UniqueName: \"kubernetes.io/projected/5fafda3f-e0cd-4477-9c10-442af83a835b-kube-api-access-nzmjn\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.537948 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dx96\" (UniqueName: \"kubernetes.io/projected/9dc494bd-d6ef-4a22-8312-67750ebb3dbe-kube-api-access-6dx96\") pod \"horizon-operator-controller-manager-5fb775575f-k765q\" (UID: \"9dc494bd-d6ef-4a22-8312-67750ebb3dbe\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.593940 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs"
Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.594589 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q"
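Note the retry schedule in the mount failure above and in the later entries for the same volumes: the first failure for the infra-operator cert is retried after 500ms, the next (at 10:20:58.105) after 1s, the next (at 10:20:59.120) after 2s, i.e. the durationBeforeRetry doubles on each consecutive failure of the same operation. A minimal sketch of that policy; the 500ms seed matches the log, while the cap is an assumed value for illustration, not necessarily kubelet's exact constant:

```go
package main

import (
	"fmt"
	"time"
)

// backoff doubles the delay after each failure, as seen in the
// durationBeforeRetry values logged by nestedpendingoperations.
type backoff struct {
	delay, limit time.Duration
}

func (b *backoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.limit {
		b.delay = b.limit // clamp so repeated failures never wait longer than limit
	}
	return d
}

func main() {
	b := backoff{delay: 500 * time.Millisecond, limit: 2 * time.Minute}
	for i := 0; i < 4; i++ {
		fmt.Println(b.next()) // 500ms, 1s, 2s, 4s, ...
	}
}
```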
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.596786 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.598462 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-vhk6m" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.598614 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64lls\" (UniqueName: \"kubernetes.io/projected/7f20ca5f-d244-45be-864d-3b8ad3d456ea-kube-api-access-64lls\") pod \"manila-operator-controller-manager-7dd968899f-qrkwl\" (UID: \"7f20ca5f-d244-45be-864d-3b8ad3d456ea\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.598682 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vghr\" (UniqueName: \"kubernetes.io/projected/2f204595-5d98-4c16-b5d1-5004c6cae836-kube-api-access-4vghr\") pod \"ironic-operator-controller-manager-5f4b8bd54d-w7ldz\" (UID: \"2f204595-5d98-4c16-b5d1-5004c6cae836\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.598717 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47896\" (UniqueName: \"kubernetes.io/projected/42f76062-3a9d-45c1-b928-d9ca236ec8ab-kube-api-access-47896\") pod \"mariadb-operator-controller-manager-67bf948998-5zbbw\" (UID: \"42f76062-3a9d-45c1-b928-d9ca236ec8ab\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.598751 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69sw\" (UniqueName: \"kubernetes.io/projected/1a136ea1-ab68-4f60-8fb2-969363f25337-kube-api-access-k69sw\") pod \"keystone-operator-controller-manager-84f48565d4-gb8tp\" (UID: \"1a136ea1-ab68-4f60-8fb2-969363f25337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.601521 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.602731 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lr6qh" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.608488 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.609763 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.614241 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dl88t" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.619055 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.626979 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vghr\" (UniqueName: \"kubernetes.io/projected/2f204595-5d98-4c16-b5d1-5004c6cae836-kube-api-access-4vghr\") pod \"ironic-operator-controller-manager-5f4b8bd54d-w7ldz\" (UID: \"2f204595-5d98-4c16-b5d1-5004c6cae836\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.629903 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69sw\" (UniqueName: \"kubernetes.io/projected/1a136ea1-ab68-4f60-8fb2-969363f25337-kube-api-access-k69sw\") pod \"keystone-operator-controller-manager-84f48565d4-gb8tp\" (UID: \"1a136ea1-ab68-4f60-8fb2-969363f25337\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.636725 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.645713 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.650077 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.660294 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.662451 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.665135 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bqqr5" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.666282 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.673955 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.699562 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26tml\" (UniqueName: \"kubernetes.io/projected/21f46dec-fb01-4293-ad08-706eb63a8738-kube-api-access-26tml\") pod \"nova-operator-controller-manager-55bff696bd-t47jc\" (UID: \"21f46dec-fb01-4293-ad08-706eb63a8738\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.699617 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47896\" (UniqueName: \"kubernetes.io/projected/42f76062-3a9d-45c1-b928-d9ca236ec8ab-kube-api-access-47896\") pod \"mariadb-operator-controller-manager-67bf948998-5zbbw\" (UID: \"42f76062-3a9d-45c1-b928-d9ca236ec8ab\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.699686 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znfrh\" (UniqueName: \"kubernetes.io/projected/27ab6ab7-e411-466c-bc4a-97d1660c547e-kube-api-access-znfrh\") pod \"octavia-operator-controller-manager-6687f8d877-5lzr6\" (UID: \"27ab6ab7-e411-466c-bc4a-97d1660c547e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.699789 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mblb\" (UniqueName: \"kubernetes.io/projected/4f112d60-8db7-4ec2-a82d-c7627ade05a3-kube-api-access-5mblb\") pod \"neutron-operator-controller-manager-585dbc889-pwdks\" (UID: \"4f112d60-8db7-4ec2-a82d-c7627ade05a3\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.699862 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64lls\" (UniqueName: \"kubernetes.io/projected/7f20ca5f-d244-45be-864d-3b8ad3d456ea-kube-api-access-64lls\") pod \"manila-operator-controller-manager-7dd968899f-qrkwl\" (UID: \"7f20ca5f-d244-45be-864d-3b8ad3d456ea\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.704142 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.705709 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.708727 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-qfx9f" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.718354 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.723612 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.724567 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.734435 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.751950 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.753009 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.753728 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.753818 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.754402 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.772574 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.772649 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.773549 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.773639 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.799108 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-ftqqr"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.800346 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.814497 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-ftqqr"] Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.935086 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lzl2q" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.936973 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mhjhl" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.937191 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-g7t5t" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.937481 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fbpzm" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.938122 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-q6hht" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.938561 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.938563 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64lls\" (UniqueName: \"kubernetes.io/projected/7f20ca5f-d244-45be-864d-3b8ad3d456ea-kube-api-access-64lls\") pod \"manila-operator-controller-manager-7dd968899f-qrkwl\" (UID: \"7f20ca5f-d244-45be-864d-3b8ad3d456ea\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.941989 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47896\" (UniqueName: \"kubernetes.io/projected/42f76062-3a9d-45c1-b928-d9ca236ec8ab-kube-api-access-47896\") pod \"mariadb-operator-controller-manager-67bf948998-5zbbw\" (UID: \"42f76062-3a9d-45c1-b928-d9ca236ec8ab\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944121 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvdn5\" (UniqueName: \"kubernetes.io/projected/e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58-kube-api-access-rvdn5\") pod \"telemetry-operator-controller-manager-64b5b76f97-ck5g7\" (UID: \"e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944746 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944792 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-pvgrh\" (UniqueName: \"kubernetes.io/projected/3e47047f-9303-47e2-8312-c83315e1a3ff-kube-api-access-pvgrh\") pod \"ovn-operator-controller-manager-788c46999f-g8qz8\" (UID: \"3e47047f-9303-47e2-8312-c83315e1a3ff\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944876 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mblb\" (UniqueName: \"kubernetes.io/projected/4f112d60-8db7-4ec2-a82d-c7627ade05a3-kube-api-access-5mblb\") pod \"neutron-operator-controller-manager-585dbc889-pwdks\" (UID: \"4f112d60-8db7-4ec2-a82d-c7627ade05a3\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944931 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9djc\" (UniqueName: \"kubernetes.io/projected/84af1f21-c29e-4846-9ce1-ea345cbad4fc-kube-api-access-l9djc\") pod \"swift-operator-controller-manager-68fc8c869-mrvfq\" (UID: \"84af1f21-c29e-4846-9ce1-ea345cbad4fc\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.944979 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26tml\" (UniqueName: \"kubernetes.io/projected/21f46dec-fb01-4293-ad08-706eb63a8738-kube-api-access-26tml\") pod \"nova-operator-controller-manager-55bff696bd-t47jc\" (UID: \"21f46dec-fb01-4293-ad08-706eb63a8738\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.945029 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djcfh\" (UniqueName: \"kubernetes.io/projected/76bde002-75f6-4c4a-af3d-16aec5a221f4-kube-api-access-djcfh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.945080 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znfrh\" (UniqueName: \"kubernetes.io/projected/27ab6ab7-e411-466c-bc4a-97d1660c547e-kube-api-access-znfrh\") pod \"octavia-operator-controller-manager-6687f8d877-5lzr6\" (UID: \"27ab6ab7-e411-466c-bc4a-97d1660c547e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.974538 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znfrh\" (UniqueName: \"kubernetes.io/projected/27ab6ab7-e411-466c-bc4a-97d1660c547e-kube-api-access-znfrh\") pod \"octavia-operator-controller-manager-6687f8d877-5lzr6\" (UID: \"27ab6ab7-e411-466c-bc4a-97d1660c547e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.984432 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26tml\" (UniqueName: \"kubernetes.io/projected/21f46dec-fb01-4293-ad08-706eb63a8738-kube-api-access-26tml\") pod \"nova-operator-controller-manager-55bff696bd-t47jc\" (UID: \"21f46dec-fb01-4293-ad08-706eb63a8738\") " 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:20:57 crc kubenswrapper[5010]: I0203 10:20:57.984499 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mblb\" (UniqueName: \"kubernetes.io/projected/4f112d60-8db7-4ec2-a82d-c7627ade05a3-kube-api-access-5mblb\") pod \"neutron-operator-controller-manager-585dbc889-pwdks\" (UID: \"4f112d60-8db7-4ec2-a82d-c7627ade05a3\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.072721 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc"] Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.073476 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.077174 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-frpdt" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.077596 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.077700 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.089567 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc"] Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104488 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvdn5\" (UniqueName: \"kubernetes.io/projected/e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58-kube-api-access-rvdn5\") pod \"telemetry-operator-controller-manager-64b5b76f97-ck5g7\" (UID: \"e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104554 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104586 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gl62\" (UniqueName: \"kubernetes.io/projected/a62d6669-692b-4909-b192-4348ac82a50d-kube-api-access-5gl62\") pod \"test-operator-controller-manager-56f8bfcd9f-pgwx2\" (UID: \"a62d6669-692b-4909-b192-4348ac82a50d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104626 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvgrh\" (UniqueName: \"kubernetes.io/projected/3e47047f-9303-47e2-8312-c83315e1a3ff-kube-api-access-pvgrh\") pod \"ovn-operator-controller-manager-788c46999f-g8qz8\" (UID: \"3e47047f-9303-47e2-8312-c83315e1a3ff\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:20:58 
crc kubenswrapper[5010]: I0203 10:20:58.104684 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104725 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bldlv\" (UniqueName: \"kubernetes.io/projected/37a4f3fa-bbaf-433d-9835-6ac576351651-kube-api-access-bldlv\") pod \"watcher-operator-controller-manager-564965969-ftqqr\" (UID: \"37a4f3fa-bbaf-433d-9835-6ac576351651\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104758 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9djc\" (UniqueName: \"kubernetes.io/projected/84af1f21-c29e-4846-9ce1-ea345cbad4fc-kube-api-access-l9djc\") pod \"swift-operator-controller-manager-68fc8c869-mrvfq\" (UID: \"84af1f21-c29e-4846-9ce1-ea345cbad4fc\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104819 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6j7f\" (UniqueName: \"kubernetes.io/projected/8251c193-3c53-4651-87da-8b216cf907aa-kube-api-access-r6j7f\") pod \"placement-operator-controller-manager-5b964cf4cd-d99mj\" (UID: \"8251c193-3c53-4651-87da-8b216cf907aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.104849 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djcfh\" (UniqueName: \"kubernetes.io/projected/76bde002-75f6-4c4a-af3d-16aec5a221f4-kube-api-access-djcfh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.105817 5010 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.105872 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. No retries permitted until 2026-02-03 10:20:58.605855126 +0000 UTC m=+1128.761831255 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.105878 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.105952 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. No retries permitted until 2026-02-03 10:20:59.105928908 +0000 UTC m=+1129.261905067 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.416181 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.417515 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.419039 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420364 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gl62\" (UniqueName: \"kubernetes.io/projected/a62d6669-692b-4909-b192-4348ac82a50d-kube-api-access-5gl62\") pod \"test-operator-controller-manager-56f8bfcd9f-pgwx2\" (UID: \"a62d6669-692b-4909-b192-4348ac82a50d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420451 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420506 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420652 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bldlv\" (UniqueName: \"kubernetes.io/projected/37a4f3fa-bbaf-433d-9835-6ac576351651-kube-api-access-bldlv\") pod \"watcher-operator-controller-manager-564965969-ftqqr\" 
(UID: \"37a4f3fa-bbaf-433d-9835-6ac576351651\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420787 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dv2g\" (UniqueName: \"kubernetes.io/projected/54aaeb1d-8a23-413f-b1f4-5115b167d78b-kube-api-access-7dv2g\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.420851 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6j7f\" (UniqueName: \"kubernetes.io/projected/8251c193-3c53-4651-87da-8b216cf907aa-kube-api-access-r6j7f\") pod \"placement-operator-controller-manager-5b964cf4cd-d99mj\" (UID: \"8251c193-3c53-4651-87da-8b216cf907aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.421880 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.422944 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.445639 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvgrh\" (UniqueName: \"kubernetes.io/projected/3e47047f-9303-47e2-8312-c83315e1a3ff-kube-api-access-pvgrh\") pod \"ovn-operator-controller-manager-788c46999f-g8qz8\" (UID: \"3e47047f-9303-47e2-8312-c83315e1a3ff\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.453307 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.458708 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djcfh\" (UniqueName: \"kubernetes.io/projected/76bde002-75f6-4c4a-af3d-16aec5a221f4-kube-api-access-djcfh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.489229 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvdn5\" (UniqueName: \"kubernetes.io/projected/e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58-kube-api-access-rvdn5\") pod \"telemetry-operator-controller-manager-64b5b76f97-ck5g7\" (UID: \"e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.677438 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9djc\" (UniqueName: \"kubernetes.io/projected/84af1f21-c29e-4846-9ce1-ea345cbad4fc-kube-api-access-l9djc\") pod \"swift-operator-controller-manager-68fc8c869-mrvfq\" (UID: \"84af1f21-c29e-4846-9ce1-ea345cbad4fc\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.689587 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dv2g\" (UniqueName: \"kubernetes.io/projected/54aaeb1d-8a23-413f-b1f4-5115b167d78b-kube-api-access-7dv2g\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.690910 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.690964 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.691023 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.697815 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.697915 5010 secret.go:188] 
Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.697961 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. No retries permitted until 2026-02-03 10:20:59.697945771 +0000 UTC m=+1129.853921900 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.698090 5010 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.698140 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:20:59.198114215 +0000 UTC m=+1129.354090344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "metrics-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: E0203 10:20:58.703163 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:20:59.203140424 +0000 UTC m=+1129.359116553 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.726456 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6j7f\" (UniqueName: \"kubernetes.io/projected/8251c193-3c53-4651-87da-8b216cf907aa-kube-api-access-r6j7f\") pod \"placement-operator-controller-manager-5b964cf4cd-d99mj\" (UID: \"8251c193-3c53-4651-87da-8b216cf907aa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.733176 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bldlv\" (UniqueName: \"kubernetes.io/projected/37a4f3fa-bbaf-433d-9835-6ac576351651-kube-api-access-bldlv\") pod \"watcher-operator-controller-manager-564965969-ftqqr\" (UID: \"37a4f3fa-bbaf-433d-9835-6ac576351651\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.741733 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gl62\" (UniqueName: \"kubernetes.io/projected/a62d6669-692b-4909-b192-4348ac82a50d-kube-api-access-5gl62\") pod \"test-operator-controller-manager-56f8bfcd9f-pgwx2\" (UID: \"a62d6669-692b-4909-b192-4348ac82a50d\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.744488 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dv2g\" (UniqueName: \"kubernetes.io/projected/54aaeb1d-8a23-413f-b1f4-5115b167d78b-kube-api-access-7dv2g\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.744869 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:20:58 crc kubenswrapper[5010]: I0203 10:20:58.903008 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.108930 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.113589 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj"] Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.114358 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj"] Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.114449 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.116472 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jlf56" Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.120321 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.120395 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:01.120377122 +0000 UTC m=+1131.276353251 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.143571 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.160113 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.173050 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.210169 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2zwx\" (UniqueName: \"kubernetes.io/projected/2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a-kube-api-access-c2zwx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kj7mj\" (UID: \"2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.210304 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.210337 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.210439 5010 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.210503 5010 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:00.210483114 +0000 UTC m=+1130.366459243 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "metrics-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.211895 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.211930 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:00.211920151 +0000 UTC m=+1130.367896280 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.346746 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2zwx\" (UniqueName: \"kubernetes.io/projected/2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a-kube-api-access-c2zwx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kj7mj\" (UID: \"2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.498588 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2zwx\" (UniqueName: \"kubernetes.io/projected/2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a-kube-api-access-c2zwx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kj7mj\" (UID: \"2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.577206 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.700019 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.701431 5010 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: E0203 10:20:59.701473 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. 
No retries permitted until 2026-02-03 10:21:01.701459194 +0000 UTC m=+1131.857435313 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.724788 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp"] Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.736040 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72"] Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.767507 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56"] Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.774313 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q"] Feb 03 10:20:59 crc kubenswrapper[5010]: W0203 10:20:59.794328 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a136ea1_ab68_4f60_8fb2_969363f25337.slice/crio-247073d823e29079b70a880eb5a01130a2597ed24f667e8b834f53d6af4afd90 WatchSource:0}: Error finding container 247073d823e29079b70a880eb5a01130a2597ed24f667e8b834f53d6af4afd90: Status 404 returned error can't find the container with id 247073d823e29079b70a880eb5a01130a2597ed24f667e8b834f53d6af4afd90 Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.965652 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" event={"ID":"a7d72ea1-7126-4768-9cf8-f590ebd216d7","Type":"ContainerStarted","Data":"777584da4ae303e1bee67558c39b19de945ee8851e1de7f3cddcbb09a5faf862"} Feb 03 10:20:59 crc kubenswrapper[5010]: I0203 10:20:59.967262 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" event={"ID":"1a136ea1-ab68-4f60-8fb2-969363f25337","Type":"ContainerStarted","Data":"247073d823e29079b70a880eb5a01130a2597ed24f667e8b834f53d6af4afd90"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.085209 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.218049 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.218102 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 
10:21:00 crc kubenswrapper[5010]: E0203 10:21:00.218248 5010 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 10:21:00 crc kubenswrapper[5010]: E0203 10:21:00.218323 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:02.218285638 +0000 UTC m=+1132.374261777 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "metrics-server-cert" not found Feb 03 10:21:00 crc kubenswrapper[5010]: E0203 10:21:00.218378 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 10:21:00 crc kubenswrapper[5010]: E0203 10:21:00.218406 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:02.218397391 +0000 UTC m=+1132.374373520 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.327438 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.345683 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.361539 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.383843 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8"] Feb 03 10:21:00 crc kubenswrapper[5010]: W0203 10:21:00.485501 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fa8a872_8dc5_4e6d_838a_5dc54e6d4bbe.slice/crio-314cd8ffccb4bf543aaf592699c50c8d3be532bfb7978dd3fd40059992a22bba WatchSource:0}: Error finding container 314cd8ffccb4bf543aaf592699c50c8d3be532bfb7978dd3fd40059992a22bba: Status 404 returned error can't find the container with id 314cd8ffccb4bf543aaf592699c50c8d3be532bfb7978dd3fd40059992a22bba Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.779390 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.809338 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj"] Feb 03 10:21:00 crc kubenswrapper[5010]: W0203 10:21:00.816299 5010 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8251c193_3c53_4651_87da_8b216cf907aa.slice/crio-67cb9f81a95e2b3746c143178f801cc9201360836d4b672048f54115f4fa4b2b WatchSource:0}: Error finding container 67cb9f81a95e2b3746c143178f801cc9201360836d4b672048f54115f4fa4b2b: Status 404 returned error can't find the container with id 67cb9f81a95e2b3746c143178f801cc9201360836d4b672048f54115f4fa4b2b Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.822456 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.832738 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6"] Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.914153 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj"] Feb 03 10:21:00 crc kubenswrapper[5010]: W0203 10:21:00.916239 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cbbe9fa_4c61_41fc_9a62_41dbaea09a0a.slice/crio-80d30c684b7fac1a947146f508ed062bd5dd4c014aa41f4b7cb243691925af4a WatchSource:0}: Error finding container 80d30c684b7fac1a947146f508ed062bd5dd4c014aa41f4b7cb243691925af4a: Status 404 returned error can't find the container with id 80d30c684b7fac1a947146f508ed062bd5dd4c014aa41f4b7cb243691925af4a Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.977969 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" event={"ID":"9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe","Type":"ContainerStarted","Data":"314cd8ffccb4bf543aaf592699c50c8d3be532bfb7978dd3fd40059992a22bba"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.979323 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" event={"ID":"3e47047f-9303-47e2-8312-c83315e1a3ff","Type":"ContainerStarted","Data":"c63a0db7216ee41563ab86de9ee998a54ea1ea70afdfd4a16c1ba6f2203f310b"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.980544 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" event={"ID":"2f204595-5d98-4c16-b5d1-5004c6cae836","Type":"ContainerStarted","Data":"cd9efe4f3ce1880d64f7b8b57dc176717a78e49c4e3649fa93097dacdb67f0db"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.981706 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" event={"ID":"42f76062-3a9d-45c1-b928-d9ca236ec8ab","Type":"ContainerStarted","Data":"ce9cbe44e818ebc74946896e08243f13a574c52ebf60de90e4365e4039c1c903"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.982968 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" event={"ID":"27ab6ab7-e411-466c-bc4a-97d1660c547e","Type":"ContainerStarted","Data":"a23413ee3b29f499b89e8dc8330a4c6e2c4f840dd46371abd5e70fbcf792193f"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.984496 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" 
event={"ID":"8251c193-3c53-4651-87da-8b216cf907aa","Type":"ContainerStarted","Data":"67cb9f81a95e2b3746c143178f801cc9201360836d4b672048f54115f4fa4b2b"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.985876 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" event={"ID":"9dc494bd-d6ef-4a22-8312-67750ebb3dbe","Type":"ContainerStarted","Data":"6c8f1e6b9f75d5b192f66093034e4d6f58a99c74a14523fa14500727eb106374"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.987040 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" event={"ID":"2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a","Type":"ContainerStarted","Data":"80d30c684b7fac1a947146f508ed062bd5dd4c014aa41f4b7cb243691925af4a"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.988280 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" event={"ID":"74803e29-48a3-4667-bcdb-a94f381545b5","Type":"ContainerStarted","Data":"6e01976389e1fb3fa323370b3dc0da56c38b304756117a6b78876dd18b07a733"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.990243 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" event={"ID":"7f20ca5f-d244-45be-864d-3b8ad3d456ea","Type":"ContainerStarted","Data":"77a142fe2b9c3b3d5d5de4607bb1f9d5bfd2395c269d99dfa990a0721140f3b6"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.991413 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" event={"ID":"fd413d86-2cda-4079-a895-5cb60928a47f","Type":"ContainerStarted","Data":"05e59ea5914ae024ac2ab3d90428654f3fee850a8ccaa9548fcd24dd465e95ed"} Feb 03 10:21:00 crc kubenswrapper[5010]: I0203 10:21:00.992600 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" event={"ID":"d33dc0fd-847b-41cc-a8ac-afde40120ba2","Type":"ContainerStarted","Data":"5e28092feea0417fec92df560efc7fdb66d64913de365dd260ca97018c70d5f3"} Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.121689 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc"] Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.136929 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq"] Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.145144 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.145403 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.145508 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. 
No retries permitted until 2026-02-03 10:21:05.145484823 +0000 UTC m=+1135.301460992 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.146811 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode51fff09_23b1_4bf0_b4e2_eeb2e6ee3c58.slice/crio-4bfbf9d4f63c9391c4d4f857c540da0940559e2d8d4e353bcb1e788f1790431a WatchSource:0}: Error finding container 4bfbf9d4f63c9391c4d4f857c540da0940559e2d8d4e353bcb1e788f1790431a: Status 404 returned error can't find the container with id 4bfbf9d4f63c9391c4d4f857c540da0940559e2d8d4e353bcb1e788f1790431a Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.146712 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-ftqqr"] Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.155652 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2"] Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.162972 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7"] Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.167131 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda62d6669_692b_4909_b192_4348ac82a50d.slice/crio-f64984c38128739c7391db832b7bed14b6b51b869203734056195d6793167d0d WatchSource:0}: Error finding container f64984c38128739c7391db832b7bed14b6b51b869203734056195d6793167d0d: Status 404 returned error can't find the container with id f64984c38128739c7391db832b7bed14b6b51b869203734056195d6793167d0d Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.168736 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks"] Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.172440 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21f46dec_fb01_4293_ad08_706eb63a8738.slice/crio-3f0754bd9e5babedd813570f881635452f0be75353955fbc0465a5388b23dadf WatchSource:0}: Error finding container 3f0754bd9e5babedd813570f881635452f0be75353955fbc0465a5388b23dadf: Status 404 returned error can't find the container with id 3f0754bd9e5babedd813570f881635452f0be75353955fbc0465a5388b23dadf Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.172750 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f112d60_8db7_4ec2_a82d_c7627ade05a3.slice/crio-4014d8defd7cc40c72c8e5af76f2fbb11a6ee18d3ff5ad690d739d6472bf6f2e WatchSource:0}: Error finding container 4014d8defd7cc40c72c8e5af76f2fbb11a6ee18d3ff5ad690d739d6472bf6f2e: Status 404 returned error can't find the container with id 4014d8defd7cc40c72c8e5af76f2fbb11a6ee18d3ff5ad690d739d6472bf6f2e Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.176661 5010 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a4f3fa_bbaf_433d_9835_6ac576351651.slice/crio-67516141402a2760dc386c4329f04fc6c235f2d8d95e6610c9fd6c1c1d3ab909 WatchSource:0}: Error finding container 67516141402a2760dc386c4329f04fc6c235f2d8d95e6610c9fd6c1c1d3ab909: Status 404 returned error can't find the container with id 67516141402a2760dc386c4329f04fc6c235f2d8d95e6610c9fd6c1c1d3ab909 Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.178785 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-26tml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-t47jc_openstack-operators(21f46dec-fb01-4293-ad08-706eb63a8738): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.178929 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-pwdks_openstack-operators(4f112d60-8db7-4ec2-a82d-c7627ade05a3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.180038 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" podUID="21f46dec-fb01-4293-ad08-706eb63a8738" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.180084 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" podUID="4f112d60-8db7-4ec2-a82d-c7627ade05a3" Feb 03 10:21:01 crc kubenswrapper[5010]: W0203 10:21:01.180906 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84af1f21_c29e_4846_9ce1_ea345cbad4fc.slice/crio-06c82c051d3d741f31db1af40db286dd7c40aab5dfa765b63713941b6bf104ac WatchSource:0}: Error finding container 06c82c051d3d741f31db1af40db286dd7c40aab5dfa765b63713941b6bf104ac: Status 404 returned error can't find the container with id 06c82c051d3d741f31db1af40db286dd7c40aab5dfa765b63713941b6bf104ac Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.181352 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bldlv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-ftqqr_openstack-operators(37a4f3fa-bbaf-433d-9835-6ac576351651): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.182797 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" podUID="37a4f3fa-bbaf-433d-9835-6ac576351651" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.185804 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: 
{{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9djc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-mrvfq_openstack-operators(84af1f21-c29e-4846-9ce1-ea345cbad4fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.186973 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podUID="84af1f21-c29e-4846-9ce1-ea345cbad4fc" Feb 03 10:21:01 crc kubenswrapper[5010]: I0203 10:21:01.771888 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.772047 5010 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:21:01 crc kubenswrapper[5010]: E0203 10:21:01.772088 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. No retries permitted until 2026-02-03 10:21:05.772075492 +0000 UTC m=+1135.928051621 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.027331 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" event={"ID":"4f112d60-8db7-4ec2-a82d-c7627ade05a3","Type":"ContainerStarted","Data":"4014d8defd7cc40c72c8e5af76f2fbb11a6ee18d3ff5ad690d739d6472bf6f2e"}
Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.029640 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" podUID="4f112d60-8db7-4ec2-a82d-c7627ade05a3"
Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.031784 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" event={"ID":"84af1f21-c29e-4846-9ce1-ea345cbad4fc","Type":"ContainerStarted","Data":"06c82c051d3d741f31db1af40db286dd7c40aab5dfa765b63713941b6bf104ac"}
Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.035074 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" event={"ID":"21f46dec-fb01-4293-ad08-706eb63a8738","Type":"ContainerStarted","Data":"3f0754bd9e5babedd813570f881635452f0be75353955fbc0465a5388b23dadf"}
Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.035335 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podUID="84af1f21-c29e-4846-9ce1-ea345cbad4fc"
Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.038079 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" podUID="21f46dec-fb01-4293-ad08-706eb63a8738"
Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.048556 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" event={"ID":"37a4f3fa-bbaf-433d-9835-6ac576351651","Type":"ContainerStarted","Data":"67516141402a2760dc386c4329f04fc6c235f2d8d95e6610c9fd6c1c1d3ab909"}
Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.052730 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" podUID="37a4f3fa-bbaf-433d-9835-6ac576351651"
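The ErrImagePull "pull QPS exceeded" entries at 10:21:01 and the ImagePullBackOff entries here are two stages of the same throttling. The kubelet rate-limits registry pulls through a token bucket (the KubeletConfiguration fields registryPullQPS and registryBurst, which default upstream to 5 and 10); when roughly twenty operator pods request their images in the same instant, the bucket empties and the overflow pulls fail immediately. A sketch of that admission behavior, with the upstream defaults taken as an assumption rather than read from this node:

```python
# Token-bucket sketch of kubelet's image-pull throttle (registryPullQPS /
# registryBurst). The values below (5 QPS, burst 10) are the upstream
# KubeletConfiguration defaults, assumed rather than read from this node.
from dataclasses import dataclass

@dataclass
class TokenBucket:
    qps: float = 5.0      # tokens added per second (registryPullQPS)
    burst: int = 10       # bucket capacity (registryBurst)
    tokens: float = 10.0  # start full
    last: float = 0.0     # timestamp of the last refill

    def try_acquire(self, now: float) -> bool:
        # Refill based on elapsed time, cap at burst, then spend one token.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # surfaces as ErrImagePull "pull QPS exceeded"

bucket = TokenBucket()
# ~20 operator pods asking for their images at the same moment:
results = [bucket.try_acquire(now=0.0) for _ in range(20)]
print(results.count(True), "pulls admitted,", results.count(False), "rejected")
# -> 10 pulls admitted, 10 rejected
```

The rejected pulls are then retried under the kubelet's image back-off (upstream defaults double the wait from 10s up to a 5-minute cap), which is why the same four pods report "Back-off pulling image" from 10:21:02 onward instead of a second QPS error.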
Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.053515 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" event={"ID":"e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58","Type":"ContainerStarted","Data":"4bfbf9d4f63c9391c4d4f857c540da0940559e2d8d4e353bcb1e788f1790431a"}
Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.066202 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" event={"ID":"a62d6669-692b-4909-b192-4348ac82a50d","Type":"ContainerStarted","Data":"f64984c38128739c7391db832b7bed14b6b51b869203734056195d6793167d0d"}
Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.280241 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc"
Feb 03 10:21:02 crc kubenswrapper[5010]: I0203 10:21:02.280303 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc"
Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.280443 5010 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.280513 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:06.28049522 +0000 UTC m=+1136.436471349 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "metrics-server-cert" not found
Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.280535 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 03 10:21:02 crc kubenswrapper[5010]: E0203 10:21:02.280645 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:06.280618683 +0000 UTC m=+1136.436594842 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found
Feb 03 10:21:03 crc kubenswrapper[5010]: E0203 10:21:03.215123 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" podUID="37a4f3fa-bbaf-433d-9835-6ac576351651"
Feb 03 10:21:03 crc kubenswrapper[5010]: E0203 10:21:03.216463 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" podUID="4f112d60-8db7-4ec2-a82d-c7627ade05a3"
Feb 03 10:21:03 crc kubenswrapper[5010]: E0203 10:21:03.218031 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podUID="84af1f21-c29e-4846-9ce1-ea345cbad4fc"
Feb 03 10:21:03 crc kubenswrapper[5010]: E0203 10:21:03.222591 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" podUID="21f46dec-fb01-4293-ad08-706eb63a8738"
Feb 03 10:21:05 crc kubenswrapper[5010]: I0203 10:21:05.227424 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm"
Feb 03 10:21:05 crc kubenswrapper[5010]: E0203 10:21:05.228180 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 03 10:21:05 crc kubenswrapper[5010]: E0203 10:21:05.228328 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:13.22830873 +0000 UTC m=+1143.384284859 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found
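Reading the durationBeforeRetry values across this excerpt shows the volume-mount retry policy directly: each failed attempt for a given volume doubles the wait, 500ms, 1s, 2s, 4s, 8s, and below 16s. The metrics-certs volume walks that ladder before mounting successfully at 10:21:14, while the webhook, infra, and baremetal cert volumes reach 16s. A small sketch that reproduces the schedule; the initial 500ms and the doubling factor are visible in the log, while the roughly two-minute cap is the upstream kubelet default, assumed here since the excerpt never reaches it:

```python
# Sketch of the retry schedule visible in the durationBeforeRetry fields:
# each failed MountVolume attempt doubles the wait for that volume.
# The 500ms start matches the log; the ~2-minute cap is the upstream
# kubelet default and is assumed, not visible in this excerpt.
from datetime import timedelta

def backoff_schedule(initial=timedelta(milliseconds=500),
                     factor=2.0,
                     cap=timedelta(minutes=2, seconds=2),
                     attempts=10):
    wait = initial
    for attempt in range(1, attempts + 1):
        yield attempt, wait
        wait = min(timedelta(seconds=wait.total_seconds() * factor), cap)

for attempt, wait in backoff_schedule(attempts=6):
    print(f"attempt {attempt}: retry in {wait.total_seconds():g}s")
# attempt 1: retry in 0.5s ... attempt 6: retry in 16s
```

Each volume gets its own schedule (the operations are keyed by volume name in nestedpendingoperations), which is why the baremetal cert and the openstack-operator cert volumes sit at different rungs of the ladder at the same wall-clock time.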
Feb 03 10:21:05 crc kubenswrapper[5010]: I0203 10:21:05.860574 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs"
Feb 03 10:21:05 crc kubenswrapper[5010]: E0203 10:21:05.860752 5010 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 03 10:21:05 crc kubenswrapper[5010]: E0203 10:21:05.863864 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. No retries permitted until 2026-02-03 10:21:13.86384273 +0000 UTC m=+1144.019818859 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 03 10:21:06 crc kubenswrapper[5010]: I0203 10:21:06.380160 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc"
Feb 03 10:21:06 crc kubenswrapper[5010]: I0203 10:21:06.380232 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc"
Feb 03 10:21:06 crc kubenswrapper[5010]: E0203 10:21:06.380350 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 03 10:21:06 crc kubenswrapper[5010]: E0203 10:21:06.380424 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:14.380406947 +0000 UTC m=+1144.536383076 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found Feb 03 10:21:06 crc kubenswrapper[5010]: E0203 10:21:06.380442 5010 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 10:21:06 crc kubenswrapper[5010]: E0203 10:21:06.380568 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:14.380545361 +0000 UTC m=+1144.536521580 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "metrics-server-cert" not found Feb 03 10:21:13 crc kubenswrapper[5010]: I0203 10:21:13.248893 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:13 crc kubenswrapper[5010]: E0203 10:21:13.249112 5010 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 10:21:13 crc kubenswrapper[5010]: E0203 10:21:13.249560 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert podName:5fafda3f-e0cd-4477-9c10-442af83a835b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:29.249544531 +0000 UTC m=+1159.405520660 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert") pod "infra-operator-controller-manager-79955696d6-vlmtm" (UID: "5fafda3f-e0cd-4477-9c10-442af83a835b") : secret "infra-operator-webhook-server-cert" not found Feb 03 10:21:13 crc kubenswrapper[5010]: I0203 10:21:13.961435 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:13 crc kubenswrapper[5010]: E0203 10:21:13.961806 5010 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:21:13 crc kubenswrapper[5010]: E0203 10:21:13.962004 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert podName:76bde002-75f6-4c4a-af3d-16aec5a221f4 nodeName:}" failed. No retries permitted until 2026-02-03 10:21:29.961952394 +0000 UTC m=+1160.117928563 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" (UID: "76bde002-75f6-4c4a-af3d-16aec5a221f4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.094935 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.095148 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4vghr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-w7ldz_openstack-operators(2f204595-5d98-4c16-b5d1-5004c6cae836): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.097813 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" podUID="2f204595-5d98-4c16-b5d1-5004c6cae836" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.381155 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" podUID="2f204595-5d98-4c16-b5d1-5004c6cae836" Feb 03 10:21:14 crc kubenswrapper[5010]: I0203 10:21:14.468193 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:14 crc kubenswrapper[5010]: I0203 10:21:14.468396 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.468323 5010 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.468478 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs podName:54aaeb1d-8a23-413f-b1f4-5115b167d78b nodeName:}" failed. No retries permitted until 2026-02-03 10:21:30.468460592 +0000 UTC m=+1160.624436721 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs") pod "openstack-operator-controller-manager-844f879456-5ktjc" (UID: "54aaeb1d-8a23-413f-b1f4-5115b167d78b") : secret "webhook-server-cert" not found Feb 03 10:21:14 crc kubenswrapper[5010]: I0203 10:21:14.476189 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-metrics-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.823472 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.823688 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r6j7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-d99mj_openstack-operators(8251c193-3c53-4651-87da-8b216cf907aa): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:14 crc kubenswrapper[5010]: E0203 10:21:14.825033 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" podUID="8251c193-3c53-4651-87da-8b216cf907aa" Feb 03 10:21:15 crc kubenswrapper[5010]: E0203 10:21:15.397246 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" podUID="8251c193-3c53-4651-87da-8b216cf907aa" Feb 03 10:21:15 crc kubenswrapper[5010]: E0203 10:21:15.778635 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Feb 03 10:21:15 crc kubenswrapper[5010]: E0203 10:21:15.778837 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-znfrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-5lzr6_openstack-operators(27ab6ab7-e411-466c-bc4a-97d1660c547e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:15 crc kubenswrapper[5010]: E0203 10:21:15.780237 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" podUID="27ab6ab7-e411-466c-bc4a-97d1660c547e" Feb 03 10:21:16 crc kubenswrapper[5010]: I0203 10:21:16.390198 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:21:16 crc kubenswrapper[5010]: I0203 10:21:16.390307 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:21:16 crc kubenswrapper[5010]: I0203 10:21:16.390388 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:21:16 crc kubenswrapper[5010]: I0203 10:21:16.391386 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:21:16 crc kubenswrapper[5010]: I0203 10:21:16.391455 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b" gracePeriod=600 Feb 03 10:21:16 crc kubenswrapper[5010]: E0203 10:21:16.404043 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" podUID="27ab6ab7-e411-466c-bc4a-97d1660c547e" Feb 03 10:21:16 crc kubenswrapper[5010]: E0203 10:21:16.679849 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Feb 03 10:21:16 crc kubenswrapper[5010]: E0203 10:21:16.680026 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47896,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-5zbbw_openstack-operators(42f76062-3a9d-45c1-b928-d9ca236ec8ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:16 crc kubenswrapper[5010]: E0203 10:21:16.681383 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" podUID="42f76062-3a9d-45c1-b928-d9ca236ec8ab" Feb 03 10:21:17 crc kubenswrapper[5010]: I0203 10:21:17.410587 5010 generic.go:334] "Generic (PLEG): container finished" 
podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b" exitCode=0 Feb 03 10:21:17 crc kubenswrapper[5010]: I0203 10:21:17.410678 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b"} Feb 03 10:21:17 crc kubenswrapper[5010]: I0203 10:21:17.410757 5010 scope.go:117] "RemoveContainer" containerID="9442102e724f69e1d556f61f5773f0e8e33b6a283cb3f40b3f679d223bc6c1e0" Feb 03 10:21:17 crc kubenswrapper[5010]: E0203 10:21:17.413322 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" podUID="42f76062-3a9d-45c1-b928-d9ca236ec8ab" Feb 03 10:21:19 crc kubenswrapper[5010]: E0203 10:21:19.748677 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Feb 03 10:21:19 crc kubenswrapper[5010]: E0203 10:21:19.749453 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l6zg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-j87lc_openstack-operators(fd413d86-2cda-4079-a895-5cb60928a47f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:19 crc kubenswrapper[5010]: E0203 10:21:19.750720 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" podUID="fd413d86-2cda-4079-a895-5cb60928a47f" Feb 03 10:21:20 crc kubenswrapper[5010]: E0203 10:21:20.607201 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" podUID="fd413d86-2cda-4079-a895-5cb60928a47f" Feb 03 10:21:20 crc kubenswrapper[5010]: E0203 10:21:20.787736 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Feb 03 10:21:20 crc kubenswrapper[5010]: E0203 10:21:20.788033 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pvgrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-g8qz8_openstack-operators(3e47047f-9303-47e2-8312-c83315e1a3ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:20 crc kubenswrapper[5010]: E0203 10:21:20.789757 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" podUID="3e47047f-9303-47e2-8312-c83315e1a3ff" Feb 03 10:21:21 crc kubenswrapper[5010]: E0203 10:21:21.610175 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" podUID="3e47047f-9303-47e2-8312-c83315e1a3ff" Feb 03 10:21:28 crc kubenswrapper[5010]: E0203 10:21:28.870173 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Feb 03 10:21:28 crc kubenswrapper[5010]: E0203 10:21:28.870879 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rvdn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-ck5g7_openstack-operators(e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:28 crc kubenswrapper[5010]: E0203 10:21:28.872109 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" podUID="e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58" Feb 03 10:21:29 crc kubenswrapper[5010]: I0203 10:21:29.296030 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:29 crc kubenswrapper[5010]: I0203 10:21:29.302634 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fafda3f-e0cd-4477-9c10-442af83a835b-cert\") pod \"infra-operator-controller-manager-79955696d6-vlmtm\" (UID: \"5fafda3f-e0cd-4477-9c10-442af83a835b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:29 crc kubenswrapper[5010]: I0203 10:21:29.403334 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qfj78" Feb 03 10:21:29 crc kubenswrapper[5010]: I0203 10:21:29.412682 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:29 crc kubenswrapper[5010]: E0203 10:21:29.781740 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" podUID="e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.004692 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.008780 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/76bde002-75f6-4c4a-af3d-16aec5a221f4-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs\" (UID: \"76bde002-75f6-4c4a-af3d-16aec5a221f4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.075277 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.075526 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k69sw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-gb8tp_openstack-operators(1a136ea1-ab68-4f60-8fb2-969363f25337): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.077769 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" podUID="1a136ea1-ab68-4f60-8fb2-969363f25337" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.106055 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bqqr5" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.114972 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.513116 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.529729 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/54aaeb1d-8a23-413f-b1f4-5115b167d78b-webhook-certs\") pod \"openstack-operator-controller-manager-844f879456-5ktjc\" (UID: \"54aaeb1d-8a23-413f-b1f4-5115b167d78b\") " pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.692145 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-frpdt" Feb 03 10:21:30 crc kubenswrapper[5010]: I0203 10:21:30.700626 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.716967 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.717176 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9djc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-mrvfq_openstack-operators(84af1f21-c29e-4846-9ce1-ea345cbad4fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.718367 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podUID="84af1f21-c29e-4846-9ce1-ea345cbad4fc" Feb 03 10:21:30 crc kubenswrapper[5010]: E0203 10:21:30.727717 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" podUID="1a136ea1-ab68-4f60-8fb2-969363f25337" Feb 03 10:21:31 crc kubenswrapper[5010]: E0203 10:21:31.089611 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 03 10:21:31 crc kubenswrapper[5010]: E0203 10:21:31.089782 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2zwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-kj7mj_openstack-operators(2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:31 crc kubenswrapper[5010]: E0203 10:21:31.091357 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" podUID="2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a" Feb 03 10:21:31 crc kubenswrapper[5010]: E0203 10:21:31.725524 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" podUID="2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a" Feb 03 10:21:33 crc kubenswrapper[5010]: E0203 10:21:33.630624 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Feb 03 10:21:33 crc kubenswrapper[5010]: E0203 10:21:33.631105 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mblb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-pwdks_openstack-operators(4f112d60-8db7-4ec2-a82d-c7627ade05a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:21:33 crc kubenswrapper[5010]: E0203 10:21:33.632337 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" 
podUID="4f112d60-8db7-4ec2-a82d-c7627ade05a3" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.573925 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs"] Feb 03 10:21:34 crc kubenswrapper[5010]: W0203 10:21:34.605355 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76bde002_75f6_4c4a_af3d_16aec5a221f4.slice/crio-f4773bff07a4dfd1cfebdf7b2002157cc6730b642e68e69c1edafe73ec7917ea WatchSource:0}: Error finding container f4773bff07a4dfd1cfebdf7b2002157cc6730b642e68e69c1edafe73ec7917ea: Status 404 returned error can't find the container with id f4773bff07a4dfd1cfebdf7b2002157cc6730b642e68e69c1edafe73ec7917ea Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.933862 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" event={"ID":"a62d6669-692b-4909-b192-4348ac82a50d","Type":"ContainerStarted","Data":"b0b3ad05967ae6837dabe42486b28a7079a2e88e24fcb5f3a59ea9f9e247288a"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.937601 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.941238 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" event={"ID":"d33dc0fd-847b-41cc-a8ac-afde40120ba2","Type":"ContainerStarted","Data":"00fa265647cf1f7b9c13346c1838550c71fe45e20a23ade146b5e8d1e4e0627b"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.942292 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.944239 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" event={"ID":"9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe","Type":"ContainerStarted","Data":"7a7eff23bd74867bd0a9ddd288af7e1fff4887a78a8f58023966f0ff012f268e"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.944900 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.947365 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" event={"ID":"2f204595-5d98-4c16-b5d1-5004c6cae836","Type":"ContainerStarted","Data":"142936888d5fafbfc7cbebaf4db2afb9b61a022deca7cdfd0c81a0336697efd0"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.948549 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.950928 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" event={"ID":"27ab6ab7-e411-466c-bc4a-97d1660c547e","Type":"ContainerStarted","Data":"477c93c95c54195257e09fda2612cee052d9e515071d3b6caca81a04a814e2f6"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.951546 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.952889 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" event={"ID":"76bde002-75f6-4c4a-af3d-16aec5a221f4","Type":"ContainerStarted","Data":"f4773bff07a4dfd1cfebdf7b2002157cc6730b642e68e69c1edafe73ec7917ea"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.954341 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" event={"ID":"9dc494bd-d6ef-4a22-8312-67750ebb3dbe","Type":"ContainerStarted","Data":"898bd508c6b7348e0a4dff6ed01fd54493d4e41e40364a9b3af8e0e4d29f585c"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.955123 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.962616 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" event={"ID":"74803e29-48a3-4667-bcdb-a94f381545b5","Type":"ContainerStarted","Data":"eacc45b0c3dcafd851d243b297203ba1375c484f7d82674035f86e3ba800be39"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.964386 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.965883 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" event={"ID":"a7d72ea1-7126-4768-9cf8-f590ebd216d7","Type":"ContainerStarted","Data":"f26f3bf9b553bfabf2e4d7cc30f713eac02d83838ab543ccaf1eacf4c9fb3c56"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.966447 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.969105 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" event={"ID":"37a4f3fa-bbaf-433d-9835-6ac576351651","Type":"ContainerStarted","Data":"001695bd6766de9969188a95ab07b3e467ac326b310bc0115504d112185eb457"} Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.969780 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:21:34 crc kubenswrapper[5010]: I0203 10:21:34.972205 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88"} Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.017330 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" podStartSLOduration=9.129402895 podStartE2EDuration="38.017313249s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.169699163 +0000 UTC m=+1131.325675292" lastFinishedPulling="2026-02-03 10:21:30.057609517 +0000 UTC m=+1160.213585646" observedRunningTime="2026-02-03 
10:21:35.01657577 +0000 UTC m=+1165.172551899" watchObservedRunningTime="2026-02-03 10:21:35.017313249 +0000 UTC m=+1165.173289378" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.081411 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" podStartSLOduration=5.046628529 podStartE2EDuration="38.081388514s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.820760829 +0000 UTC m=+1130.976736968" lastFinishedPulling="2026-02-03 10:21:33.855520824 +0000 UTC m=+1164.011496953" observedRunningTime="2026-02-03 10:21:35.077696079 +0000 UTC m=+1165.233672208" watchObservedRunningTime="2026-02-03 10:21:35.081388514 +0000 UTC m=+1165.237364653" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.115147 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm"] Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.127830 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" podStartSLOduration=8.129103885 podStartE2EDuration="38.127807615s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.059737869 +0000 UTC m=+1130.215713998" lastFinishedPulling="2026-02-03 10:21:30.058441609 +0000 UTC m=+1160.214417728" observedRunningTime="2026-02-03 10:21:35.116670769 +0000 UTC m=+1165.272646898" watchObservedRunningTime="2026-02-03 10:21:35.127807615 +0000 UTC m=+1165.283783744" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.149649 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" podStartSLOduration=7.208232112 podStartE2EDuration="38.149634555s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.517019374 +0000 UTC m=+1130.672995493" lastFinishedPulling="2026-02-03 10:21:31.458421807 +0000 UTC m=+1161.614397936" observedRunningTime="2026-02-03 10:21:35.147487 +0000 UTC m=+1165.303463139" watchObservedRunningTime="2026-02-03 10:21:35.149634555 +0000 UTC m=+1165.305610674" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.518726 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" podStartSLOduration=6.890456467 podStartE2EDuration="38.518698826s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:20:59.830105196 +0000 UTC m=+1129.986081335" lastFinishedPulling="2026-02-03 10:21:31.458347565 +0000 UTC m=+1161.614323694" observedRunningTime="2026-02-03 10:21:35.173962439 +0000 UTC m=+1165.329938558" watchObservedRunningTime="2026-02-03 10:21:35.518698826 +0000 UTC m=+1165.674674945" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.595295 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc"] Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.607141 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" podStartSLOduration=7.625237124 podStartE2EDuration="38.607121206s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.476452303 
+0000 UTC m=+1130.632428432" lastFinishedPulling="2026-02-03 10:21:31.458336395 +0000 UTC m=+1161.614312514" observedRunningTime="2026-02-03 10:21:35.582508224 +0000 UTC m=+1165.738484353" watchObservedRunningTime="2026-02-03 10:21:35.607121206 +0000 UTC m=+1165.763097335" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.624474 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" podStartSLOduration=5.965803117 podStartE2EDuration="38.62445216s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.181202219 +0000 UTC m=+1131.337178348" lastFinishedPulling="2026-02-03 10:21:33.839851222 +0000 UTC m=+1163.995827391" observedRunningTime="2026-02-03 10:21:35.606271214 +0000 UTC m=+1165.762247333" watchObservedRunningTime="2026-02-03 10:21:35.62445216 +0000 UTC m=+1165.780428289" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.641115 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" podStartSLOduration=8.643191479 podStartE2EDuration="38.641094178s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.059667237 +0000 UTC m=+1130.215643366" lastFinishedPulling="2026-02-03 10:21:30.057569906 +0000 UTC m=+1160.213546065" observedRunningTime="2026-02-03 10:21:35.635708779 +0000 UTC m=+1165.791684908" watchObservedRunningTime="2026-02-03 10:21:35.641094178 +0000 UTC m=+1165.797070307" Feb 03 10:21:35 crc kubenswrapper[5010]: I0203 10:21:35.808851 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" podStartSLOduration=5.241285995 podStartE2EDuration="38.808825852s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.190970877 +0000 UTC m=+1130.346947006" lastFinishedPulling="2026-02-03 10:21:33.758510724 +0000 UTC m=+1163.914486863" observedRunningTime="2026-02-03 10:21:35.802973382 +0000 UTC m=+1165.958949511" watchObservedRunningTime="2026-02-03 10:21:35.808825852 +0000 UTC m=+1165.964801981" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.011730 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" event={"ID":"5fafda3f-e0cd-4477-9c10-442af83a835b","Type":"ContainerStarted","Data":"8ba28391a4f869facdc74d9bbf111998a780bf4d3143a6a7cdd54015d0bbd3e8"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.334052 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" event={"ID":"7f20ca5f-d244-45be-864d-3b8ad3d456ea","Type":"ContainerStarted","Data":"b81bf8e6753ad7f4df6d5ed304c0dc056977b0f6c4cbc6f464d8ef6777a17d21"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.334453 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.359692 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" event={"ID":"fd413d86-2cda-4079-a895-5cb60928a47f","Type":"ContainerStarted","Data":"c4738166e5c9501693a4f5252feb4adcf787d48f49829efdb460557e6325468b"} Feb 03 10:21:36 crc kubenswrapper[5010]: 
I0203 10:21:36.360808 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.366513 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" podStartSLOduration=10.118355435 podStartE2EDuration="39.366488443s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.810319181 +0000 UTC m=+1130.966295310" lastFinishedPulling="2026-02-03 10:21:30.058452189 +0000 UTC m=+1160.214428318" observedRunningTime="2026-02-03 10:21:36.3643953 +0000 UTC m=+1166.520371519" watchObservedRunningTime="2026-02-03 10:21:36.366488443 +0000 UTC m=+1166.522464582" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.397946 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" event={"ID":"8251c193-3c53-4651-87da-8b216cf907aa","Type":"ContainerStarted","Data":"02e6d18766df28ae2be61477770b6c60ba7708062cca853adf079caf02116663"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.398793 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.403968 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" event={"ID":"54aaeb1d-8a23-413f-b1f4-5115b167d78b","Type":"ContainerStarted","Data":"48e48eecc18118f5bf419511c50740bd07d8554f90e6ce9edac04b8f39285f60"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.407765 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" event={"ID":"42f76062-3a9d-45c1-b928-d9ca236ec8ab","Type":"ContainerStarted","Data":"95909e18d67efe0e0f957f06e6075c59341efd7d7470d60d6dfa2aceb48ca170"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.408345 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.409729 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" event={"ID":"21f46dec-fb01-4293-ad08-706eb63a8738","Type":"ContainerStarted","Data":"405c144a207ec53c2b358d4018acacf1d21418a9966232e0e12c9913c6b94d36"} Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.410066 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.645171 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" podStartSLOduration=6.242400987 podStartE2EDuration="39.645153005s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.476522615 +0000 UTC m=+1130.632498744" lastFinishedPulling="2026-02-03 10:21:33.879274623 +0000 UTC m=+1164.035250762" observedRunningTime="2026-02-03 10:21:36.640559977 +0000 UTC m=+1166.796536106" watchObservedRunningTime="2026-02-03 10:21:36.645153005 +0000 UTC m=+1166.801129134" Feb 03 10:21:36 crc 
kubenswrapper[5010]: I0203 10:21:36.666907 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" podStartSLOduration=6.62983706 podStartE2EDuration="39.666888423s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.822081263 +0000 UTC m=+1130.978057392" lastFinishedPulling="2026-02-03 10:21:33.859132626 +0000 UTC m=+1164.015108755" observedRunningTime="2026-02-03 10:21:36.663056894 +0000 UTC m=+1166.819033033" watchObservedRunningTime="2026-02-03 10:21:36.666888423 +0000 UTC m=+1166.822864552" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.782936 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" podStartSLOduration=7.09945412 podStartE2EDuration="39.78290138s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.178620032 +0000 UTC m=+1131.334596161" lastFinishedPulling="2026-02-03 10:21:33.862067292 +0000 UTC m=+1164.018043421" observedRunningTime="2026-02-03 10:21:36.777485791 +0000 UTC m=+1166.933461930" watchObservedRunningTime="2026-02-03 10:21:36.78290138 +0000 UTC m=+1166.938877519" Feb 03 10:21:36 crc kubenswrapper[5010]: I0203 10:21:36.785962 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" podStartSLOduration=6.7354720310000005 podStartE2EDuration="39.785941848s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.811647886 +0000 UTC m=+1130.967624025" lastFinishedPulling="2026-02-03 10:21:33.862117693 +0000 UTC m=+1164.018093842" observedRunningTime="2026-02-03 10:21:36.684606557 +0000 UTC m=+1166.840582696" watchObservedRunningTime="2026-02-03 10:21:36.785941848 +0000 UTC m=+1166.941917987" Feb 03 10:21:37 crc kubenswrapper[5010]: I0203 10:21:37.453667 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" event={"ID":"3e47047f-9303-47e2-8312-c83315e1a3ff","Type":"ContainerStarted","Data":"2bfd27ba413791f9894c8dae8f8c75fb06555b31d65ce43e9d97cafd9632186a"} Feb 03 10:21:37 crc kubenswrapper[5010]: I0203 10:21:37.453946 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:21:37 crc kubenswrapper[5010]: I0203 10:21:37.458388 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" event={"ID":"54aaeb1d-8a23-413f-b1f4-5115b167d78b","Type":"ContainerStarted","Data":"804414e75040674ae44ab56bccf0047302fe31aeb07b77b3f85749d554e2f554"} Feb 03 10:21:37 crc kubenswrapper[5010]: I0203 10:21:37.801824 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" podStartSLOduration=39.801801157 podStartE2EDuration="39.801801157s" podCreationTimestamp="2026-02-03 10:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:21:37.788796954 +0000 UTC m=+1167.944773093" watchObservedRunningTime="2026-02-03 10:21:37.801801157 +0000 UTC m=+1167.957777296" Feb 03 10:21:37 crc kubenswrapper[5010]: I0203 10:21:37.802617 5010 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" podStartSLOduration=4.631867124 podStartE2EDuration="40.802609768s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.476451053 +0000 UTC m=+1130.632427182" lastFinishedPulling="2026-02-03 10:21:36.647193697 +0000 UTC m=+1166.803169826" observedRunningTime="2026-02-03 10:21:37.754476643 +0000 UTC m=+1167.910452772" watchObservedRunningTime="2026-02-03 10:21:37.802609768 +0000 UTC m=+1167.958585897" Feb 03 10:21:38 crc kubenswrapper[5010]: I0203 10:21:38.470010 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:39 crc kubenswrapper[5010]: I0203 10:21:39.368157 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-ftqqr" Feb 03 10:21:39 crc kubenswrapper[5010]: I0203 10:21:39.373653 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-pgwx2" Feb 03 10:21:42 crc kubenswrapper[5010]: E0203 10:21:42.503311 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podUID="84af1f21-c29e-4846-9ce1-ea345cbad4fc" Feb 03 10:21:43 crc kubenswrapper[5010]: I0203 10:21:43.573604 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" event={"ID":"76bde002-75f6-4c4a-af3d-16aec5a221f4","Type":"ContainerStarted","Data":"90c54b39b42c73082f43ed2105ae72b9c62f82eb4b2d22238ce3746be666885c"} Feb 03 10:21:43 crc kubenswrapper[5010]: I0203 10:21:43.574141 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:43 crc kubenswrapper[5010]: I0203 10:21:43.576669 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" event={"ID":"e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58","Type":"ContainerStarted","Data":"4784f5c4498de9a6f020a44a7d688652b4c1a311da24c75fd931088641891823"} Feb 03 10:21:43 crc kubenswrapper[5010]: I0203 10:21:43.577590 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:21:43 crc kubenswrapper[5010]: I0203 10:21:43.656987 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" podStartSLOduration=38.575914227 podStartE2EDuration="46.656955677s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:34.608103167 +0000 UTC m=+1164.764079296" lastFinishedPulling="2026-02-03 10:21:42.689144617 +0000 UTC m=+1172.845120746" observedRunningTime="2026-02-03 10:21:43.636846909 +0000 UTC m=+1173.792823038" watchObservedRunningTime="2026-02-03 10:21:43.656955677 +0000 UTC m=+1173.812931806" Feb 03 10:21:43 crc 
kubenswrapper[5010]: I0203 10:21:43.662190 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" podStartSLOduration=5.126418678 podStartE2EDuration="46.662168411s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.154570965 +0000 UTC m=+1131.310547104" lastFinishedPulling="2026-02-03 10:21:42.690320708 +0000 UTC m=+1172.846296837" observedRunningTime="2026-02-03 10:21:43.657346167 +0000 UTC m=+1173.813322296" watchObservedRunningTime="2026-02-03 10:21:43.662168411 +0000 UTC m=+1173.818144540"
Feb 03 10:21:45 crc kubenswrapper[5010]: I0203 10:21:45.623206 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" event={"ID":"5fafda3f-e0cd-4477-9c10-442af83a835b","Type":"ContainerStarted","Data":"357d69064366b42c434470b17e36ea54790f1db35252cfc36c0312f802d971a9"}
Feb 03 10:21:45 crc kubenswrapper[5010]: I0203 10:21:45.623671 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm"
Feb 03 10:21:45 crc kubenswrapper[5010]: I0203 10:21:45.625537 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" event={"ID":"2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a","Type":"ContainerStarted","Data":"107ab28d6491cdb0d441844d7e5d6fcf9652c74749468dd746510ff199dc9cc2"}
Feb 03 10:21:45 crc kubenswrapper[5010]: I0203 10:21:45.714764 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" podStartSLOduration=39.245763804 podStartE2EDuration="48.714750235s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:35.138357806 +0000 UTC m=+1165.294333935" lastFinishedPulling="2026-02-03 10:21:44.607344237 +0000 UTC m=+1174.763320366" observedRunningTime="2026-02-03 10:21:45.711749868 +0000 UTC m=+1175.867725997" watchObservedRunningTime="2026-02-03 10:21:45.714750235 +0000 UTC m=+1175.870726364"
Feb 03 10:21:45 crc kubenswrapper[5010]: I0203 10:21:45.739957 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kj7mj" podStartSLOduration=3.572468826 podStartE2EDuration="47.739940084s" podCreationTimestamp="2026-02-03 10:20:58 +0000 UTC" firstStartedPulling="2026-02-03 10:21:00.918722453 +0000 UTC m=+1131.074698582" lastFinishedPulling="2026-02-03 10:21:45.086193711 +0000 UTC m=+1175.242169840" observedRunningTime="2026-02-03 10:21:45.737389409 +0000 UTC m=+1175.893365538" watchObservedRunningTime="2026-02-03 10:21:45.739940084 +0000 UTC m=+1175.895916213"
Feb 03 10:21:46 crc kubenswrapper[5010]: E0203 10:21:46.503494 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" podUID="4f112d60-8db7-4ec2-a82d-c7627ade05a3"
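The two E-level records in this stretch (swift-operator at 10:21:42, neutron-operator here at 10:21:46) are the kubelet's image-pull back-off loop; both pods do start once their pulls finish (10:21:58 and 10:22:00 below). A throwaway filter for pulling the pod and image out of such records; the regexes are fitted to this journal's escaping and are an assumption, not a kubelet-defined format:

```python
import re

# Fitted to the escaped quoting in the "Error syncing pod" records above.
IMAGE = re.compile(r'Back-off pulling image \\+"(?P<image>[^\\"]+)\\+"')
POD = re.compile(r'pod="(?P<pod>[^"]+)"')

def image_pull_backoffs(lines):
    """Yield (pod, image) for every ImagePullBackOff record in a journal stream."""
    for line in lines:
        if "ImagePullBackOff" not in line:
            continue
        img, pod = IMAGE.search(line), POD.search(line)
        if img and pod:
            yield pod.group("pod"), img.group("image")
```

Feb 03 10:21:46 crc kubenswrapper[5010]: I0203 10:21:46.874757 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp"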
event={"ID":"1a136ea1-ab68-4f60-8fb2-969363f25337","Type":"ContainerStarted","Data":"490269a44640bb9f1a9df7f0d361e3bafd3214e09c6d6e9bdb60100714018690"} Feb 03 10:21:46 crc kubenswrapper[5010]: I0203 10:21:46.875553 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:21:46 crc kubenswrapper[5010]: I0203 10:21:46.924024 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" podStartSLOduration=3.586910559 podStartE2EDuration="49.923998899s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:20:59.801514512 +0000 UTC m=+1129.957490641" lastFinishedPulling="2026-02-03 10:21:46.138602852 +0000 UTC m=+1176.294578981" observedRunningTime="2026-02-03 10:21:46.898537162 +0000 UTC m=+1177.054513291" watchObservedRunningTime="2026-02-03 10:21:46.923998899 +0000 UTC m=+1177.079975028" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.422010 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-52g72" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.431106 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jvb56" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.483411 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gnxws" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.517688 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-j87lc" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.693608 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-7szqs" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.697335 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-k765q" Feb 03 10:21:47 crc kubenswrapper[5010]: I0203 10:21:47.945694 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-w7ldz" Feb 03 10:21:48 crc kubenswrapper[5010]: I0203 10:21:48.422792 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-5zbbw" Feb 03 10:21:48 crc kubenswrapper[5010]: I0203 10:21:48.427419 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-qrkwl" Feb 03 10:21:48 crc kubenswrapper[5010]: I0203 10:21:48.428194 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-t47jc" Feb 03 10:21:48 crc kubenswrapper[5010]: I0203 10:21:48.470809 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-g8qz8" Feb 03 10:21:48 crc kubenswrapper[5010]: I0203 10:21:48.748835 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ck5g7" Feb 03 10:21:49 crc kubenswrapper[5010]: I0203 10:21:49.181389 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-5lzr6" Feb 03 10:21:49 crc kubenswrapper[5010]: I0203 10:21:49.477534 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-vlmtm" Feb 03 10:21:49 crc kubenswrapper[5010]: I0203 10:21:49.490355 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-d99mj" Feb 03 10:21:50 crc kubenswrapper[5010]: I0203 10:21:50.120406 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs" Feb 03 10:21:50 crc kubenswrapper[5010]: I0203 10:21:50.769375 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-844f879456-5ktjc" Feb 03 10:21:57 crc kubenswrapper[5010]: I0203 10:21:57.649735 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gb8tp" Feb 03 10:21:58 crc kubenswrapper[5010]: I0203 10:21:58.952469 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" event={"ID":"84af1f21-c29e-4846-9ce1-ea345cbad4fc","Type":"ContainerStarted","Data":"0cc548162ab45320514953fc43721b0edb440b21ad672ac052b4678a26b3d148"} Feb 03 10:21:58 crc kubenswrapper[5010]: I0203 10:21:58.953553 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:21:58 crc kubenswrapper[5010]: I0203 10:21:58.967342 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" podStartSLOduration=5.091378551 podStartE2EDuration="1m1.967321317s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.185631662 +0000 UTC m=+1131.341607791" lastFinishedPulling="2026-02-03 10:21:58.061574408 +0000 UTC m=+1188.217550557" observedRunningTime="2026-02-03 10:21:58.965483889 +0000 UTC m=+1189.121460028" watchObservedRunningTime="2026-02-03 10:21:58.967321317 +0000 UTC m=+1189.123297446" Feb 03 10:22:00 crc kubenswrapper[5010]: I0203 10:22:00.965118 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" event={"ID":"4f112d60-8db7-4ec2-a82d-c7627ade05a3","Type":"ContainerStarted","Data":"c84c019987a09666223fed742f6b03c976bf9021baab3ece6c66c96a4a605018"} Feb 03 10:22:00 crc kubenswrapper[5010]: I0203 10:22:00.965581 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:22:00 crc kubenswrapper[5010]: I0203 10:22:00.984437 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" podStartSLOduration=5.221696533 podStartE2EDuration="1m3.984415427s" podCreationTimestamp="2026-02-03 10:20:57 +0000 UTC" firstStartedPulling="2026-02-03 10:21:01.178795777 +0000 
UTC m=+1131.334771906" lastFinishedPulling="2026-02-03 10:21:59.941514671 +0000 UTC m=+1190.097490800" observedRunningTime="2026-02-03 10:22:00.978359821 +0000 UTC m=+1191.134335950" watchObservedRunningTime="2026-02-03 10:22:00.984415427 +0000 UTC m=+1191.140391556" Feb 03 10:22:08 crc kubenswrapper[5010]: I0203 10:22:08.426790 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pwdks" Feb 03 10:22:09 crc kubenswrapper[5010]: I0203 10:22:09.147629 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-mrvfq" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.440059 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.445478 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.447590 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tzv47" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.447935 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.447984 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.449552 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.453142 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.495038 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.496259 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.498460 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.506841 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.597120 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrz69\" (UniqueName: \"kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.597417 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cjqt\" (UniqueName: \"kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.597516 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.597726 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.597804 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.699709 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.699755 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.699800 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrz69\" (UniqueName: \"kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" 
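Every volume in the two dnsmasq pods above walks the same reconciler sequence: operationExecutor.VerifyControllerAttachedVolume started (reconciler_common.go:245), then operationExecutor.MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637, just below). A sketch that folds a journal stream into per-volume progress, useful for spotting volumes that never reach SetUp; the parsing is fitted to these lines and is an assumption:

```python
import re

# Phase markers exactly as they appear in the kubelet records above, in order.
PHASES = [
    "operationExecutor.VerifyControllerAttachedVolume started",
    "operationExecutor.MountVolume started",
    "MountVolume.SetUp succeeded",
]
VOL = re.compile(r'for volume \\"(?P<vol>[^\\"]+)\\" .*pod="(?P<pod>[^"]+)"')

def mount_progress(lines):
    """Map (pod, volume) -> index of the furthest phase seen so far."""
    state = {}
    for line in lines:
        for idx, marker in enumerate(PHASES):
            if marker in line:
                m = VOL.search(line)
                if m:
                    key = (m.group("pod"), m.group("vol"))
                    state[key] = max(state.get(key, -1), idx)
                break
    return state
```

A volume stuck at index 0 or 1 in the returned map would point at an attach or mount problem; in this section all of them reach index 2.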
Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.699845 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cjqt\" (UniqueName: \"kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.699863 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.700763 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.700981 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.701571 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.718588 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cjqt\" (UniqueName: \"kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt\") pod \"dnsmasq-dns-78dd6ddcc-k9cm6\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.720330 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrz69\" (UniqueName: \"kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69\") pod \"dnsmasq-dns-675f4bcbfc-lkm9t\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.763907 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:22:23 crc kubenswrapper[5010]: I0203 10:22:23.809751 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:22:24 crc kubenswrapper[5010]: I0203 10:22:24.577648 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:22:24 crc kubenswrapper[5010]: I0203 10:22:24.677530 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:22:25 crc kubenswrapper[5010]: I0203 10:22:25.257085 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" event={"ID":"05e75df7-a63f-4821-8aa1-79b20fe2e100","Type":"ContainerStarted","Data":"9e3776a5d3f524e0c405d299c28cd32959ccfee9a9abe7e9369d1c2023e2ff59"} Feb 03 10:22:25 crc kubenswrapper[5010]: I0203 10:22:25.259666 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" event={"ID":"6fec8d31-6436-4bfa-aae8-154ca2b74cf2","Type":"ContainerStarted","Data":"d7f9681b86e8830df0ea7e53a19e40fbea0d9f1b8f5d34f7c2f7074013fa6ad9"} Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.559505 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.585632 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.590793 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.624775 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.686896 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.686962 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.686988 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29t54\" (UniqueName: \"kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.787866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.787924 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29t54\" (UniqueName: \"kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54\") 
pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.788020 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.789070 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.789763 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.812987 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.816525 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29t54\" (UniqueName: \"kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54\") pod \"dnsmasq-dns-666b6646f7-kpzlc\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.860515 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.863646 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.865785 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.890283 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.890336 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.890359 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64qtv\" (UniqueName: \"kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.928654 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.994490 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.994559 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.994588 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64qtv\" (UniqueName: \"kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.995474 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:26 crc kubenswrapper[5010]: I0203 10:22:26.995777 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:27 crc kubenswrapper[5010]: 
I0203 10:22:27.014552 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64qtv\" (UniqueName: \"kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv\") pod \"dnsmasq-dns-57d769cc4f-g56qr\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.208722 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.522899 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:22:27 crc kubenswrapper[5010]: W0203 10:22:27.538769 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86085e66_cdd4_45aa_af20_f8856cdfed1c.slice/crio-e7f926e73e67c36bc02fcc6793463e0a1d4e2f826cfb6f5739264417666543a5 WatchSource:0}: Error finding container e7f926e73e67c36bc02fcc6793463e0a1d4e2f826cfb6f5739264417666543a5: Status 404 returned error can't find the container with id e7f926e73e67c36bc02fcc6793463e0a1d4e2f826cfb6f5739264417666543a5 Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.688599 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.689673 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.699157 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.699439 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.699600 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.702093 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.709240 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9nfm9" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.709274 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.709386 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.709240 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.725640 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:22:27 crc kubenswrapper[5010]: W0203 10:22:27.746547 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode75b7259_a771_487b_9d36_990ce8571c11.slice/crio-474180be2209d7238391d27eab7728591f11004bc751b0c6114b9196608f8e03 WatchSource:0}: Error finding container 474180be2209d7238391d27eab7728591f11004bc751b0c6114b9196608f8e03: Status 404 returned error can't find the container with id 
474180be2209d7238391d27eab7728591f11004bc751b0c6114b9196608f8e03 Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821009 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821068 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821126 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821153 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821182 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821223 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821256 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5rwd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821280 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821313 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc 
kubenswrapper[5010]: I0203 10:22:27.821348 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.821380 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923381 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923655 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923706 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923727 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923766 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923782 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923804 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923821 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923845 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5rwd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923863 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.923888 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.924649 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.924683 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.926714 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.926879 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.927091 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.927812 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.930015 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.934626 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.935530 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.943806 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5rwd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.948573 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.950080 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " pod="openstack/rabbitmq-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.982501 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.988974 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.992206 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ld7g9" Feb 03 10:22:27 crc kubenswrapper[5010]: I0203 10:22:27.995274 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.000335 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.000368 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.000335 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.000697 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.000713 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.007157 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.035822 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.126988 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127034 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127076 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127104 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127175 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127200 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127232 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkwkl\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127260 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127286 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127310 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.127368 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350638 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350685 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350705 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350724 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350755 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350772 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350788 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkwkl\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350811 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350836 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350868 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.350889 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.352081 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.352158 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.352207 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.352433 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.352433 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.353086 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.358764 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.359547 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.364533 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.367070 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.375702 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" 
event={"ID":"86085e66-cdd4-45aa-af20-f8856cdfed1c","Type":"ContainerStarted","Data":"e7f926e73e67c36bc02fcc6793463e0a1d4e2f826cfb6f5739264417666543a5"} Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.375803 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" event={"ID":"e75b7259-a771-487b-9d36-990ce8571c11","Type":"ContainerStarted","Data":"474180be2209d7238391d27eab7728591f11004bc751b0c6114b9196608f8e03"} Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.377573 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkwkl\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.383312 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.617028 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.921415 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.922999 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.928096 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.939079 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.939154 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.939435 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9rf4l" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.940577 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.966265 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 03 10:22:28 crc kubenswrapper[5010]: I0203 10:22:28.987013 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110196 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110469 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110490 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bng9\" (UniqueName: \"kubernetes.io/projected/449f0b91-9186-4a16-b1b4-7f199b57a428-kube-api-access-6bng9\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110511 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-operator-scripts\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110544 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-generated\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110590 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110624 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.110665 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-default\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211249 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211297 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-default\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211328 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 
03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211356 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-kolla-config\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211376 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bng9\" (UniqueName: \"kubernetes.io/projected/449f0b91-9186-4a16-b1b4-7f199b57a428-kube-api-access-6bng9\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211396 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-operator-scripts\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211425 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-generated\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211465 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.211616 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.212966 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-kolla-config\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.213773 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-default\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.222716 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.224561 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/449f0b91-9186-4a16-b1b4-7f199b57a428-operator-scripts\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.225199 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/449f0b91-9186-4a16-b1b4-7f199b57a428-config-data-generated\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.247027 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/449f0b91-9186-4a16-b1b4-7f199b57a428-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.274482 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bng9\" (UniqueName: \"kubernetes.io/projected/449f0b91-9186-4a16-b1b4-7f199b57a428-kube-api-access-6bng9\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.289374 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"449f0b91-9186-4a16-b1b4-7f199b57a428\") " pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.561993 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 03 10:22:29 crc kubenswrapper[5010]: I0203 10:22:29.596915 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerStarted","Data":"97cdcebe285a4f7a484868c96029b1b0d97151d7f63016f73836ed870ad4197d"} Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.279321 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: W0203 10:22:30.353031 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2066c8b_8b89_4dcb_972d_aea4dcd1c105.slice/crio-6f662c0876b2bb6a1a91c65ab1f7cf8a34f9b5b27a5996afb9426d7a8621423b WatchSource:0}: Error finding container 6f662c0876b2bb6a1a91c65ab1f7cf8a34f9b5b27a5996afb9426d7a8621423b: Status 404 returned error can't find the container with id 6f662c0876b2bb6a1a91c65ab1f7cf8a34f9b5b27a5996afb9426d7a8621423b Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.761493 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.772675 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerStarted","Data":"6f662c0876b2bb6a1a91c65ab1f7cf8a34f9b5b27a5996afb9426d7a8621423b"} Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.772900 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.772917 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/openstack-galera-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.773516 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.779165 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.779400 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.783725 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-b99c2" Feb 03 10:22:30 crc kubenswrapper[5010]: W0203 10:22:30.798860 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod449f0b91_9186_4a16_b1b4_7f199b57a428.slice/crio-ada8e281fc672f2f7e83dfdda7529a9550b6b63bc9b50aeea13aa8c29edd7a6f WatchSource:0}: Error finding container ada8e281fc672f2f7e83dfdda7529a9550b6b63bc9b50aeea13aa8c29edd7a6f: Status 404 returned error can't find the container with id ada8e281fc672f2f7e83dfdda7529a9550b6b63bc9b50aeea13aa8c29edd7a6f Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.862326 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.864773 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.874274 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vvbf9" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.874776 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.874978 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.875139 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.880812 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.967751 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.967854 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.967901 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-kolla-config\") pod \"memcached-0\" (UID: 
\"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.967975 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvhp\" (UniqueName: \"kubernetes.io/projected/95adc2d1-1093-484e-8580-53e244b420c8-kube-api-access-xpvhp\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968046 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968074 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-config-data\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968277 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968355 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968538 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968804 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968835 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ddg4\" (UniqueName: \"kubernetes.io/projected/87eb5dd8-7171-457a-8a95-eda98893319a-kube-api-access-8ddg4\") pod \"openstack-cell1-galera-0\" (UID: 
\"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:30 crc kubenswrapper[5010]: I0203 10:22:30.968911 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.070875 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.070929 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.070982 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ddg4\" (UniqueName: \"kubernetes.io/projected/87eb5dd8-7171-457a-8a95-eda98893319a-kube-api-access-8ddg4\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071011 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071061 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071123 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071567 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-kolla-config\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071607 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpvhp\" (UniqueName: \"kubernetes.io/projected/95adc2d1-1093-484e-8580-53e244b420c8-kube-api-access-xpvhp\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071637 5010 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071664 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071689 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-config-data\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071735 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.071758 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.072124 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.072432 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.073229 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/87eb5dd8-7171-457a-8a95-eda98893319a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.073390 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-kolla-config\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.073662 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95adc2d1-1093-484e-8580-53e244b420c8-config-data\") pod 
\"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.076523 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.086444 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87eb5dd8-7171-457a-8a95-eda98893319a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.116878 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.129845 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.130633 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ddg4\" (UniqueName: \"kubernetes.io/projected/87eb5dd8-7171-457a-8a95-eda98893319a-kube-api-access-8ddg4\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.186770 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.190821 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95adc2d1-1093-484e-8580-53e244b420c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.192541 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87eb5dd8-7171-457a-8a95-eda98893319a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"87eb5dd8-7171-457a-8a95-eda98893319a\") " pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.194479 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpvhp\" (UniqueName: \"kubernetes.io/projected/95adc2d1-1093-484e-8580-53e244b420c8-kube-api-access-xpvhp\") pod \"memcached-0\" (UID: \"95adc2d1-1093-484e-8580-53e244b420c8\") " pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.199040 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.457048 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 03 10:22:31 crc kubenswrapper[5010]: I0203 10:22:31.715912 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"449f0b91-9186-4a16-b1b4-7f199b57a428","Type":"ContainerStarted","Data":"ada8e281fc672f2f7e83dfdda7529a9550b6b63bc9b50aeea13aa8c29edd7a6f"} Feb 03 10:22:32 crc kubenswrapper[5010]: I0203 10:22:32.573423 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 10:22:32 crc kubenswrapper[5010]: W0203 10:22:32.615798 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eb5dd8_7171_457a_8a95_eda98893319a.slice/crio-ac746dd6dbe76f98fc4607d0c5969d9b64edb8eb2959f5f5320b75e4d2506d61 WatchSource:0}: Error finding container ac746dd6dbe76f98fc4607d0c5969d9b64edb8eb2959f5f5320b75e4d2506d61: Status 404 returned error can't find the container with id ac746dd6dbe76f98fc4607d0c5969d9b64edb8eb2959f5f5320b75e4d2506d61 Feb 03 10:22:32 crc kubenswrapper[5010]: I0203 10:22:32.900257 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"87eb5dd8-7171-457a-8a95-eda98893319a","Type":"ContainerStarted","Data":"ac746dd6dbe76f98fc4607d0c5969d9b64edb8eb2959f5f5320b75e4d2506d61"} Feb 03 10:22:32 crc kubenswrapper[5010]: I0203 10:22:32.914055 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 03 10:22:32 crc kubenswrapper[5010]: W0203 10:22:32.927455 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95adc2d1_1093_484e_8580_53e244b420c8.slice/crio-44a0cbbfa053a4752d74f2cc1c60947bdf93e07f00c67505fbedc9b010e9ea12 WatchSource:0}: Error finding container 44a0cbbfa053a4752d74f2cc1c60947bdf93e07f00c67505fbedc9b010e9ea12: Status 404 returned error can't find the container with id 44a0cbbfa053a4752d74f2cc1c60947bdf93e07f00c67505fbedc9b010e9ea12 Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.470821 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.472093 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.475371 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-k6brw" Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.492023 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.615794 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxkf4\" (UniqueName: \"kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4\") pod \"kube-state-metrics-0\" (UID: \"7b0ebfb6-7019-4de6-88df-b2161da95e9b\") " pod="openstack/kube-state-metrics-0" Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.796607 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxkf4\" (UniqueName: \"kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4\") pod \"kube-state-metrics-0\" (UID: \"7b0ebfb6-7019-4de6-88df-b2161da95e9b\") " pod="openstack/kube-state-metrics-0" Feb 03 10:22:33 crc kubenswrapper[5010]: I0203 10:22:33.823007 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxkf4\" (UniqueName: \"kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4\") pod \"kube-state-metrics-0\" (UID: \"7b0ebfb6-7019-4de6-88df-b2161da95e9b\") " pod="openstack/kube-state-metrics-0" Feb 03 10:22:34 crc kubenswrapper[5010]: I0203 10:22:34.033083 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"95adc2d1-1093-484e-8580-53e244b420c8","Type":"ContainerStarted","Data":"44a0cbbfa053a4752d74f2cc1c60947bdf93e07f00c67505fbedc9b010e9ea12"} Feb 03 10:22:34 crc kubenswrapper[5010]: I0203 10:22:34.101354 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:22:35 crc kubenswrapper[5010]: I0203 10:22:35.283625 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.187247 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7b0ebfb6-7019-4de6-88df-b2161da95e9b","Type":"ContainerStarted","Data":"99eae2ce273fff1db7b69f1325ef839ad84ecc780d3634ec59776f868fb7d556"} Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.237648 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ql6ht"] Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.239030 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.247015 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pwcwc" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.247278 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.247436 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.267765 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ql6ht"] Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.307857 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-krnr5"] Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.310567 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.324226 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-krnr5"] Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485315 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-run\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485383 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-lib\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485417 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-ovn-controller-tls-certs\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485450 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485475 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-log-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485509 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b2780eb3-7b7a-47fe-bda0-2605419df774-scripts\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " 
pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485565 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485590 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-etc-ovs\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485615 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7xp5\" (UniqueName: \"kubernetes.io/projected/1883c30e-4c38-468d-a5dc-91b07f167d67-kube-api-access-d7xp5\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485651 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-combined-ca-bundle\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485686 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-log\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485745 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f2fk\" (UniqueName: \"kubernetes.io/projected/b2780eb3-7b7a-47fe-bda0-2605419df774-kube-api-access-7f2fk\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.485769 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1883c30e-4c38-468d-a5dc-91b07f167d67-scripts\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591633 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f2fk\" (UniqueName: \"kubernetes.io/projected/b2780eb3-7b7a-47fe-bda0-2605419df774-kube-api-access-7f2fk\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591720 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1883c30e-4c38-468d-a5dc-91b07f167d67-scripts\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 
10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591771 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-run\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591799 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-lib\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591842 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-ovn-controller-tls-certs\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591861 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591875 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-log-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591894 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b2780eb3-7b7a-47fe-bda0-2605419df774-scripts\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591950 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591968 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-etc-ovs\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.591985 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7xp5\" (UniqueName: \"kubernetes.io/projected/1883c30e-4c38-468d-a5dc-91b07f167d67-kube-api-access-d7xp5\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.592009 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-combined-ca-bundle\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.592082 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-log\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.592673 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-log\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.592847 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-run\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.592949 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-var-lib\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.596602 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1883c30e-4c38-468d-a5dc-91b07f167d67-scripts\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.597195 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.597291 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-run\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.597395 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1883c30e-4c38-468d-a5dc-91b07f167d67-var-log-ovn\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.599792 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b2780eb3-7b7a-47fe-bda0-2605419df774-etc-ovs\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.600722 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-ovn-controller-tls-certs\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.604551 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b2780eb3-7b7a-47fe-bda0-2605419df774-scripts\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.615425 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1883c30e-4c38-468d-a5dc-91b07f167d67-combined-ca-bundle\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.737384 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f2fk\" (UniqueName: \"kubernetes.io/projected/b2780eb3-7b7a-47fe-bda0-2605419df774-kube-api-access-7f2fk\") pod \"ovn-controller-ovs-krnr5\" (UID: \"b2780eb3-7b7a-47fe-bda0-2605419df774\") " pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.749060 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7xp5\" (UniqueName: \"kubernetes.io/projected/1883c30e-4c38-468d-a5dc-91b07f167d67-kube-api-access-d7xp5\") pod \"ovn-controller-ql6ht\" (UID: \"1883c30e-4c38-468d-a5dc-91b07f167d67\") " pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.870724 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ql6ht" Feb 03 10:22:36 crc kubenswrapper[5010]: I0203 10:22:36.935970 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.253421 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.255112 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.260380 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.414119 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.414440 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.414593 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.416400 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.416466 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-btqnv" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535022 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535396 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535446 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535473 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535499 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535531 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535568 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-config\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.535668 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbzkw\" (UniqueName: \"kubernetes.io/projected/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-kube-api-access-cbzkw\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758379 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758454 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758491 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758530 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758557 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758582 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-config\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758736 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzkw\" (UniqueName: \"kubernetes.io/projected/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-kube-api-access-cbzkw\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.758791 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 
10:22:37.759556 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.760435 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-config\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.762179 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.766268 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.773848 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.775612 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.783832 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.789782 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbzkw\" (UniqueName: \"kubernetes.io/projected/6d6abf1f-9905-4f96-8d44-d7ef3f9f299d-kube-api-access-cbzkw\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.815844 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ql6ht"] Feb 03 10:22:37 crc kubenswrapper[5010]: I0203 10:22:37.827450 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d\") " pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:38 crc kubenswrapper[5010]: I0203 10:22:38.064347 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 03 10:22:38 crc kubenswrapper[5010]: I0203 10:22:38.441162 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ql6ht" event={"ID":"1883c30e-4c38-468d-a5dc-91b07f167d67","Type":"ContainerStarted","Data":"df053411d5d4bb018dc2b0b44a4dbe6e7facb3606a27d941f148d61d371e3c8e"} Feb 03 10:22:38 crc kubenswrapper[5010]: I0203 10:22:38.852051 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-krnr5"] Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.605421 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vqkq5"] Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.612196 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.624466 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.638393 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vqkq5"] Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.645959 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.699846 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5235b9fc-3723-4d8a-9851-e8ee89c0b084-config\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.699910 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.700673 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcq87\" (UniqueName: \"kubernetes.io/projected/5235b9fc-3723-4d8a-9851-e8ee89c0b084-kube-api-access-mcq87\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.700728 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-combined-ca-bundle\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.700747 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovn-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.700793 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovs-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.802973 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovs-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803016 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5235b9fc-3723-4d8a-9851-e8ee89c0b084-config\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803042 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803108 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcq87\" (UniqueName: \"kubernetes.io/projected/5235b9fc-3723-4d8a-9851-e8ee89c0b084-kube-api-access-mcq87\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803154 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-combined-ca-bundle\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803173 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovn-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803543 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovn-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.803595 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5235b9fc-3723-4d8a-9851-e8ee89c0b084-ovs-rundir\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.804256 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5235b9fc-3723-4d8a-9851-e8ee89c0b084-config\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.811513 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-combined-ca-bundle\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.810933 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5235b9fc-3723-4d8a-9851-e8ee89c0b084-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.830084 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcq87\" (UniqueName: \"kubernetes.io/projected/5235b9fc-3723-4d8a-9851-e8ee89c0b084-kube-api-access-mcq87\") pod \"ovn-controller-metrics-vqkq5\" (UID: \"5235b9fc-3723-4d8a-9851-e8ee89c0b084\") " pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:39 crc kubenswrapper[5010]: I0203 10:22:39.961844 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vqkq5" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.279017 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.297449 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.302609 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.309144 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.414397 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.444147 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.444251 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.444325 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.445269 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59czz\" (UniqueName: \"kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.568969 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.569027 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.569088 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.569147 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59czz\" (UniqueName: \"kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" 
Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.574365 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.576633 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.577801 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.597592 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59czz\" (UniqueName: \"kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz\") pod \"dnsmasq-dns-7fd796d7df-84hts\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.643058 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.756833 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.767499 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.770206 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-f9vnn" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.771131 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.772657 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.772822 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.792757 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881263 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881348 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzqcj\" (UniqueName: \"kubernetes.io/projected/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-kube-api-access-dzqcj\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881412 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881463 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881529 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-config\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881594 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881643 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:40 crc kubenswrapper[5010]: I0203 10:22:40.881669 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.070788 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.070950 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-config\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071093 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071148 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071185 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071295 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071329 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzqcj\" (UniqueName: \"kubernetes.io/projected/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-kube-api-access-dzqcj\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.071356 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.072263 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.098714 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.105843 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-config\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.110242 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.112482 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.113151 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.126828 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.136340 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzqcj\" (UniqueName: \"kubernetes.io/projected/6dfa0a64-db8a-457a-8eff-f27ffa8e02ce-kube-api-access-dzqcj\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.141240 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce\") " pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:41 crc kubenswrapper[5010]: I0203 10:22:41.391244 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 03 10:22:46 crc kubenswrapper[5010]: W0203 10:22:46.627436 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2780eb3_7b7a_47fe_bda0_2605419df774.slice/crio-1a90c9b04425811a0ef9e3b3afcff1dbb033a87c45ec805ac4dc4671e2408c1e WatchSource:0}: Error finding container 1a90c9b04425811a0ef9e3b3afcff1dbb033a87c45ec805ac4dc4671e2408c1e: Status 404 returned error can't find the container with id 1a90c9b04425811a0ef9e3b3afcff1dbb033a87c45ec805ac4dc4671e2408c1e Feb 03 10:22:46 crc kubenswrapper[5010]: W0203 10:22:46.630655 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d6abf1f_9905_4f96_8d44_d7ef3f9f299d.slice/crio-ea26054dd17af5b2535f663a7e1af4a481da73710705cbe70c508a6d73769fbd WatchSource:0}: Error finding container ea26054dd17af5b2535f663a7e1af4a481da73710705cbe70c508a6d73769fbd: Status 404 returned error can't find the container with id ea26054dd17af5b2535f663a7e1af4a481da73710705cbe70c508a6d73769fbd Feb 03 10:22:46 crc kubenswrapper[5010]: I0203 10:22:46.762824 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-krnr5" event={"ID":"b2780eb3-7b7a-47fe-bda0-2605419df774","Type":"ContainerStarted","Data":"1a90c9b04425811a0ef9e3b3afcff1dbb033a87c45ec805ac4dc4671e2408c1e"} Feb 03 10:22:46 crc kubenswrapper[5010]: I0203 10:22:46.764699 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d","Type":"ContainerStarted","Data":"ea26054dd17af5b2535f663a7e1af4a481da73710705cbe70c508a6d73769fbd"} Feb 03 10:22:47 crc kubenswrapper[5010]: I0203 10:22:47.987043 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:22:53 crc kubenswrapper[5010]: E0203 10:22:53.474780 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 03 10:22:53 crc kubenswrapper[5010]: E0203 10:22:53.475741 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5cfh5d5h695h6bh696h5h655h554h95h67h65bhf5h65fh567h545h5bbh67ch578h558h56h8hchf5h5bch59chbfh8bh667h647h5b6h79h5ffq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xpvhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(95adc2d1-1093-484e-8580-53e244b420c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:53 crc kubenswrapper[5010]: E0203 10:22:53.476905 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="95adc2d1-1093-484e-8580-53e244b420c8" Feb 03 10:22:53 crc kubenswrapper[5010]: E0203 10:22:53.848530 5010 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="95adc2d1-1093-484e-8580-53e244b420c8" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.801408 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.801967 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkwkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(f2066c8b-8b89-4dcb-972d-aea4dcd1c105): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.803636 5010 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.836482 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.836721 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5rwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2ce83ed2-cbef-4045-8822-6f58268b28b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.837928 5010 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.854901 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" Feb 03 10:22:54 crc kubenswrapper[5010]: E0203 10:22:54.855036 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" Feb 03 10:22:57 crc kubenswrapper[5010]: W0203 10:22:57.021637 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ea6e430_f9a6_4850_b58e_24ac04fd49a2.slice/crio-b237a98e3b61244f5b8cbba9933237b1c87653782e7c801f5d548e23ebd2e6d6 WatchSource:0}: Error finding container b237a98e3b61244f5b8cbba9933237b1c87653782e7c801f5d548e23ebd2e6d6: Status 404 returned error can't find the container with id b237a98e3b61244f5b8cbba9933237b1c87653782e7c801f5d548e23ebd2e6d6 Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.048738 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.048986 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(449f0b91-9186-4a16-b1b4-7f199b57a428): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.050671 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="449f0b91-9186-4a16-b1b4-7f199b57a428" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.076430 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.076592 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ddg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(87eb5dd8-7171-457a-8a95-eda98893319a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.077684 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="87eb5dd8-7171-457a-8a95-eda98893319a" Feb 03 10:22:57 crc kubenswrapper[5010]: I0203 10:22:57.871824 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" event={"ID":"3ea6e430-f9a6-4850-b58e-24ac04fd49a2","Type":"ContainerStarted","Data":"b237a98e3b61244f5b8cbba9933237b1c87653782e7c801f5d548e23ebd2e6d6"} Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.874847 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="87eb5dd8-7171-457a-8a95-eda98893319a" Feb 03 10:22:57 crc kubenswrapper[5010]: E0203 10:22:57.874914 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" 
podUID="449f0b91-9186-4a16-b1b4-7f199b57a428" Feb 03 10:22:58 crc kubenswrapper[5010]: E0203 10:22:58.627103 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Feb 03 10:22:58 crc kubenswrapper[5010]: E0203 10:22:58.627273 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch64ch98h5fhbch679h649h548h55h5f4h5c8h7fh686h677h5c5h5bh5b7h657h67dh58bh77h68ch76h564h9h5fch5f7hb8h54ch649h98h74q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7f2fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-krnr5_openstack(b2780eb3-7b7a-47fe-bda0-2605419df774): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:22:58 crc kubenswrapper[5010]: E0203 10:22:58.628420 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-krnr5" podUID="b2780eb3-7b7a-47fe-bda0-2605419df774" Feb 03 10:22:58 crc kubenswrapper[5010]: E0203 10:22:58.878768 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-krnr5" podUID="b2780eb3-7b7a-47fe-bda0-2605419df774" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.408132 5010 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.409135 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrz69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-lkm9t_openstack(05e75df7-a63f-4821-8aa1-79b20fe2e100): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.410456 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" podUID="05e75df7-a63f-4821-8aa1-79b20fe2e100" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.420487 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.420666 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cjqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-k9cm6_openstack(6fec8d31-6436-4bfa-aae8-154ca2b74cf2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.421906 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" podUID="6fec8d31-6436-4bfa-aae8-154ca2b74cf2" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.465461 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.465651 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64qtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-g56qr_openstack(e75b7259-a771-487b-9d36-990ce8571c11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.467021 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" podUID="e75b7259-a771-487b-9d36-990ce8571c11" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.957164 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.957715 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch64ch98h5fhbch679h649h548h55h5f4h5c8h7fh686h677h5c5h5bh5b7h657h67dh58bh77h68ch76h564h9h5fch5f7hb8h54ch649h98h74q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7xp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ql6ht_openstack(1883c30e-4c38-468d-a5dc-91b07f167d67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.959364 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ql6ht" podUID="1883c30e-4c38-468d-a5dc-91b07f167d67" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.995412 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.995587 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29t54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-kpzlc_openstack(86085e66-cdd4-45aa-af20-f8856cdfed1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:23:04 crc kubenswrapper[5010]: E0203 10:23:04.997349 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" podUID="86085e66-cdd4-45aa-af20-f8856cdfed1c" Feb 03 10:23:05 crc kubenswrapper[5010]: E0203 10:23:05.277712 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-ql6ht" podUID="1883c30e-4c38-468d-a5dc-91b07f167d67" Feb 03 10:23:05 crc kubenswrapper[5010]: E0203 10:23:05.277983 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" podUID="86085e66-cdd4-45aa-af20-f8856cdfed1c" Feb 03 10:23:05 crc kubenswrapper[5010]: I0203 10:23:05.310016 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 10:23:05 crc kubenswrapper[5010]: I0203 10:23:05.779984 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vqkq5"] Feb 03 10:23:06 crc kubenswrapper[5010]: W0203 10:23:06.068890 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5235b9fc_3723_4d8a_9851_e8ee89c0b084.slice/crio-9996fa3b5dd316c984d433b961f88b86fa6cb581820080df11cf29f09af4b0d6 WatchSource:0}: Error finding container 9996fa3b5dd316c984d433b961f88b86fa6cb581820080df11cf29f09af4b0d6: Status 404 returned error can't find the container with id 9996fa3b5dd316c984d433b961f88b86fa6cb581820080df11cf29f09af4b0d6 Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.125911 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.143640 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64qtv\" (UniqueName: \"kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv\") pod \"e75b7259-a771-487b-9d36-990ce8571c11\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.143690 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config\") pod \"e75b7259-a771-487b-9d36-990ce8571c11\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.143727 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc\") pod \"e75b7259-a771-487b-9d36-990ce8571c11\" (UID: \"e75b7259-a771-487b-9d36-990ce8571c11\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.144302 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config" (OuterVolumeSpecName: "config") pod "e75b7259-a771-487b-9d36-990ce8571c11" (UID: "e75b7259-a771-487b-9d36-990ce8571c11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.144323 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e75b7259-a771-487b-9d36-990ce8571c11" (UID: "e75b7259-a771-487b-9d36-990ce8571c11"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.149527 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv" (OuterVolumeSpecName: "kube-api-access-64qtv") pod "e75b7259-a771-487b-9d36-990ce8571c11" (UID: "e75b7259-a771-487b-9d36-990ce8571c11"). InnerVolumeSpecName "kube-api-access-64qtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.244767 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64qtv\" (UniqueName: \"kubernetes.io/projected/e75b7259-a771-487b-9d36-990ce8571c11-kube-api-access-64qtv\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.245065 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.245083 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e75b7259-a771-487b-9d36-990ce8571c11-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.282804 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.282847 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-g56qr" event={"ID":"e75b7259-a771-487b-9d36-990ce8571c11","Type":"ContainerDied","Data":"474180be2209d7238391d27eab7728591f11004bc751b0c6114b9196608f8e03"} Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.284840 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vqkq5" event={"ID":"5235b9fc-3723-4d8a-9851-e8ee89c0b084","Type":"ContainerStarted","Data":"9996fa3b5dd316c984d433b961f88b86fa6cb581820080df11cf29f09af4b0d6"} Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.285639 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce","Type":"ContainerStarted","Data":"b93c74370db9b0aef0337572f57615b0154fd2eb16769fa4ad2086643a06821a"} Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.346167 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.355649 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g56qr"] Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.466593 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.468792 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.512987 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75b7259-a771-487b-9d36-990ce8571c11" path="/var/lib/kubelet/pods/e75b7259-a771-487b-9d36-990ce8571c11/volumes" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.649491 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrz69\" (UniqueName: \"kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69\") pod \"05e75df7-a63f-4821-8aa1-79b20fe2e100\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.650672 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config" (OuterVolumeSpecName: "config") pod "05e75df7-a63f-4821-8aa1-79b20fe2e100" (UID: "05e75df7-a63f-4821-8aa1-79b20fe2e100"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.650711 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config\") pod \"05e75df7-a63f-4821-8aa1-79b20fe2e100\" (UID: \"05e75df7-a63f-4821-8aa1-79b20fe2e100\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.650772 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config\") pod \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.651404 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config" (OuterVolumeSpecName: "config") pod "6fec8d31-6436-4bfa-aae8-154ca2b74cf2" (UID: "6fec8d31-6436-4bfa-aae8-154ca2b74cf2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.651482 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc\") pod \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.651530 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cjqt\" (UniqueName: \"kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt\") pod \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\" (UID: \"6fec8d31-6436-4bfa-aae8-154ca2b74cf2\") " Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.652636 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fec8d31-6436-4bfa-aae8-154ca2b74cf2" (UID: "6fec8d31-6436-4bfa-aae8-154ca2b74cf2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.653964 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.653987 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e75df7-a63f-4821-8aa1-79b20fe2e100-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.653996 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.655399 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt" (OuterVolumeSpecName: "kube-api-access-4cjqt") pod "6fec8d31-6436-4bfa-aae8-154ca2b74cf2" (UID: "6fec8d31-6436-4bfa-aae8-154ca2b74cf2"). InnerVolumeSpecName "kube-api-access-4cjqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.655446 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69" (OuterVolumeSpecName: "kube-api-access-hrz69") pod "05e75df7-a63f-4821-8aa1-79b20fe2e100" (UID: "05e75df7-a63f-4821-8aa1-79b20fe2e100"). InnerVolumeSpecName "kube-api-access-hrz69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.755437 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrz69\" (UniqueName: \"kubernetes.io/projected/05e75df7-a63f-4821-8aa1-79b20fe2e100-kube-api-access-hrz69\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:06 crc kubenswrapper[5010]: I0203 10:23:06.755473 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cjqt\" (UniqueName: \"kubernetes.io/projected/6fec8d31-6436-4bfa-aae8-154ca2b74cf2-kube-api-access-4cjqt\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.295964 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.295989 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k9cm6" event={"ID":"6fec8d31-6436-4bfa-aae8-154ca2b74cf2","Type":"ContainerDied","Data":"d7f9681b86e8830df0ea7e53a19e40fbea0d9f1b8f5d34f7c2f7074013fa6ad9"} Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.298102 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" event={"ID":"05e75df7-a63f-4821-8aa1-79b20fe2e100","Type":"ContainerDied","Data":"9e3776a5d3f524e0c405d299c28cd32959ccfee9a9abe7e9369d1c2023e2ff59"} Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.298693 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-lkm9t" Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.357011 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.362251 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k9cm6"] Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.381337 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:23:07 crc kubenswrapper[5010]: I0203 10:23:07.387268 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-lkm9t"] Feb 03 10:23:07 crc kubenswrapper[5010]: E0203 10:23:07.410981 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05e75df7_a63f_4821_8aa1_79b20fe2e100.slice/crio-9e3776a5d3f524e0c405d299c28cd32959ccfee9a9abe7e9369d1c2023e2ff59\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05e75df7_a63f_4821_8aa1_79b20fe2e100.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fec8d31_6436_4bfa_aae8_154ca2b74cf2.slice/crio-d7f9681b86e8830df0ea7e53a19e40fbea0d9f1b8f5d34f7c2f7074013fa6ad9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fec8d31_6436_4bfa_aae8_154ca2b74cf2.slice\": RecentStats: unable to find data in memory cache]" Feb 03 10:23:08 crc kubenswrapper[5010]: I0203 10:23:08.512669 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e75df7-a63f-4821-8aa1-79b20fe2e100" path="/var/lib/kubelet/pods/05e75df7-a63f-4821-8aa1-79b20fe2e100/volumes" Feb 03 10:23:08 crc kubenswrapper[5010]: I0203 10:23:08.513493 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fec8d31-6436-4bfa-aae8-154ca2b74cf2" path="/var/lib/kubelet/pods/6fec8d31-6436-4bfa-aae8-154ca2b74cf2/volumes" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.314774 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce","Type":"ContainerStarted","Data":"987e0ad36e6e1c0af04f4ea300830c129ba933216eb5a6c0730fd3baf74641f5"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.315166 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6dfa0a64-db8a-457a-8eff-f27ffa8e02ce","Type":"ContainerStarted","Data":"26a790849fba98e2c0a6b6980ba93bb6a65ba48773df7fc18dc3719486a99aa6"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.320392 5010 generic.go:334] "Generic (PLEG): container finished" podID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerID="dcafe9c15b252f4afce63db43717e61b273dee3af36eabf6852fd51f8f27c930" exitCode=0 Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.320740 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" event={"ID":"3ea6e430-f9a6-4850-b58e-24ac04fd49a2","Type":"ContainerDied","Data":"dcafe9c15b252f4afce63db43717e61b273dee3af36eabf6852fd51f8f27c930"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.323565 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vqkq5" 
event={"ID":"5235b9fc-3723-4d8a-9851-e8ee89c0b084","Type":"ContainerStarted","Data":"d91898ed898aabcde5ef7805055788efffab2baa30ab6b08b03c958818960ece"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.328438 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d","Type":"ContainerStarted","Data":"6b569330655568b61d54a1a1c7cb51f6293c0fbdb3c0638d49c43584d6d27ab4"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.328499 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6d6abf1f-9905-4f96-8d44-d7ef3f9f299d","Type":"ContainerStarted","Data":"1e0c95cdf7c43e6e556f539bafd04b2edd3c565bf3490a19b52e90dc365be45e"} Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.341286 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=28.018237232 podStartE2EDuration="30.341263537s" podCreationTimestamp="2026-02-03 10:22:39 +0000 UTC" firstStartedPulling="2026-02-03 10:23:05.762654883 +0000 UTC m=+1255.918631012" lastFinishedPulling="2026-02-03 10:23:08.085681188 +0000 UTC m=+1258.241657317" observedRunningTime="2026-02-03 10:23:09.340425435 +0000 UTC m=+1259.496401564" watchObservedRunningTime="2026-02-03 10:23:09.341263537 +0000 UTC m=+1259.497239686" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.372816 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.595444352 podStartE2EDuration="33.372791599s" podCreationTimestamp="2026-02-03 10:22:36 +0000 UTC" firstStartedPulling="2026-02-03 10:22:46.638021372 +0000 UTC m=+1236.793997501" lastFinishedPulling="2026-02-03 10:23:06.415368619 +0000 UTC m=+1256.571344748" observedRunningTime="2026-02-03 10:23:09.36932573 +0000 UTC m=+1259.525301869" watchObservedRunningTime="2026-02-03 10:23:09.372791599 +0000 UTC m=+1259.528767748" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.413658 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vqkq5" podStartSLOduration=28.334409892 podStartE2EDuration="30.413636352s" podCreationTimestamp="2026-02-03 10:22:39 +0000 UTC" firstStartedPulling="2026-02-03 10:23:06.091368427 +0000 UTC m=+1256.247344556" lastFinishedPulling="2026-02-03 10:23:08.170594887 +0000 UTC m=+1258.326571016" observedRunningTime="2026-02-03 10:23:09.409880495 +0000 UTC m=+1259.565856634" watchObservedRunningTime="2026-02-03 10:23:09.413636352 +0000 UTC m=+1259.569612491" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.805104 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.868889 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"] Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.870547 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.876106 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 03 10:23:09 crc kubenswrapper[5010]: I0203 10:23:09.877042 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"] Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.024701 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.024881 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf9dm\" (UniqueName: \"kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.024935 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.025062 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.025121 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.126535 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.126618 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.126718 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf9dm\" (UniqueName: \"kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.126742 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.126780 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.128694 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.129072 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.129975 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.130196 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.148479 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf9dm\" (UniqueName: \"kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm\") pod \"dnsmasq-dns-86db49b7ff-bsmfs\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.204105 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.340189 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" event={"ID":"3ea6e430-f9a6-4850-b58e-24ac04fd49a2","Type":"ContainerStarted","Data":"2a39e93057d80e1a2e85ebc3a8a730552d12cf63e0e15cf7d8339a09d27bdab7"} Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.361742 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" podStartSLOduration=20.729328716 podStartE2EDuration="30.361717232s" podCreationTimestamp="2026-02-03 10:22:40 +0000 UTC" firstStartedPulling="2026-02-03 10:22:57.024535699 +0000 UTC m=+1247.180511818" lastFinishedPulling="2026-02-03 10:23:06.656924205 +0000 UTC m=+1256.812900334" observedRunningTime="2026-02-03 10:23:10.359745052 +0000 UTC m=+1260.515721181" watchObservedRunningTime="2026-02-03 10:23:10.361717232 +0000 UTC m=+1260.517693361" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.467380 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.535753 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29t54\" (UniqueName: \"kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54\") pod \"86085e66-cdd4-45aa-af20-f8856cdfed1c\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.536334 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc\") pod \"86085e66-cdd4-45aa-af20-f8856cdfed1c\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.536404 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config\") pod \"86085e66-cdd4-45aa-af20-f8856cdfed1c\" (UID: \"86085e66-cdd4-45aa-af20-f8856cdfed1c\") " Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.540660 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config" (OuterVolumeSpecName: "config") pod "86085e66-cdd4-45aa-af20-f8856cdfed1c" (UID: "86085e66-cdd4-45aa-af20-f8856cdfed1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.541574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86085e66-cdd4-45aa-af20-f8856cdfed1c" (UID: "86085e66-cdd4-45aa-af20-f8856cdfed1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.591493 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54" (OuterVolumeSpecName: "kube-api-access-29t54") pod "86085e66-cdd4-45aa-af20-f8856cdfed1c" (UID: "86085e66-cdd4-45aa-af20-f8856cdfed1c"). InnerVolumeSpecName "kube-api-access-29t54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.642744 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29t54\" (UniqueName: \"kubernetes.io/projected/86085e66-cdd4-45aa-af20-f8856cdfed1c-kube-api-access-29t54\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.643192 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.643204 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86085e66-cdd4-45aa-af20-f8856cdfed1c-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.646617 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:23:10 crc kubenswrapper[5010]: I0203 10:23:10.873819 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"] Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.065453 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.130428 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.349723 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7b0ebfb6-7019-4de6-88df-b2161da95e9b","Type":"ContainerStarted","Data":"8566fd9acbf9b37a7c0e5b8b574fab43fa6c097fb1878bb86a8c41a2e79e2d53"} Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.349857 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.351379 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" event={"ID":"794d29fd-0784-4f8c-8f62-e6753d046def","Type":"ContainerStarted","Data":"098a23dc68ddad3e911c76c8c4f89d48833726d8e974b792fb670b88ee30cae7"} Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.351408 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" event={"ID":"86085e66-cdd4-45aa-af20-f8856cdfed1c","Type":"ContainerDied","Data":"e7f926e73e67c36bc02fcc6793463e0a1d4e2f826cfb6f5739264417666543a5"} Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.354499 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"95adc2d1-1093-484e-8580-53e244b420c8","Type":"ContainerStarted","Data":"54b122c9dd1ed2e74e27738123169f3f9ae6b63c80ddae2d33dd5ab19170ae9c"} Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.355443 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kpzlc" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.355563 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.355582 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.373306 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.078888672 podStartE2EDuration="38.37326223s" podCreationTimestamp="2026-02-03 10:22:33 +0000 UTC" firstStartedPulling="2026-02-03 10:22:35.297815846 +0000 UTC m=+1225.453791985" lastFinishedPulling="2026-02-03 10:23:10.592189414 +0000 UTC m=+1260.748165543" observedRunningTime="2026-02-03 10:23:11.371436213 +0000 UTC m=+1261.527412342" watchObservedRunningTime="2026-02-03 10:23:11.37326223 +0000 UTC m=+1261.529238369" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.391252 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.391340 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.391422 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.726790818 podStartE2EDuration="41.391414018s" podCreationTimestamp="2026-02-03 10:22:30 +0000 UTC" firstStartedPulling="2026-02-03 10:22:32.945117336 +0000 UTC m=+1223.101093455" lastFinishedPulling="2026-02-03 10:23:10.609740526 +0000 UTC m=+1260.765716655" observedRunningTime="2026-02-03 10:23:11.387516497 +0000 UTC m=+1261.543492616" watchObservedRunningTime="2026-02-03 10:23:11.391414018 +0000 UTC m=+1261.547390147" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.453011 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.640081 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:23:11 crc kubenswrapper[5010]: I0203 10:23:11.647441 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kpzlc"] Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.396591 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"87eb5dd8-7171-457a-8a95-eda98893319a","Type":"ContainerStarted","Data":"836d10e13031e1e589fd13f0fcda7b9cdf717cf593196a23ab06f2b0deb83c45"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.399917 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"449f0b91-9186-4a16-b1b4-7f199b57a428","Type":"ContainerStarted","Data":"d59aa2650c8950a2a4ba7a7dc97b6834e3c2613e89263135debc00e0122c70c1"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.402684 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerStarted","Data":"10e7a7e1923769d25869f1642046743d27038f14081a9edd79e0d2a9d1c7d095"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.403861 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-krnr5" 
event={"ID":"b2780eb3-7b7a-47fe-bda0-2605419df774","Type":"ContainerStarted","Data":"70afa1a572760d9bf687091b456c83d50fa5b5467491f14c5c72b196b76b069f"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.405285 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerStarted","Data":"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.427049 5010 generic.go:334] "Generic (PLEG): container finished" podID="794d29fd-0784-4f8c-8f62-e6753d046def" containerID="ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b" exitCode=0 Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.429941 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" event={"ID":"794d29fd-0784-4f8c-8f62-e6753d046def","Type":"ContainerDied","Data":"ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b"} Feb 03 10:23:12 crc kubenswrapper[5010]: I0203 10:23:12.549819 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86085e66-cdd4-45aa-af20-f8856cdfed1c" path="/var/lib/kubelet/pods/86085e66-cdd4-45aa-af20-f8856cdfed1c/volumes" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.111810 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.437021 5010 generic.go:334] "Generic (PLEG): container finished" podID="b2780eb3-7b7a-47fe-bda0-2605419df774" containerID="70afa1a572760d9bf687091b456c83d50fa5b5467491f14c5c72b196b76b069f" exitCode=0 Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.437129 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-krnr5" event={"ID":"b2780eb3-7b7a-47fe-bda0-2605419df774","Type":"ContainerDied","Data":"70afa1a572760d9bf687091b456c83d50fa5b5467491f14c5c72b196b76b069f"} Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.491039 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.739784 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.741674 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.746660 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.747403 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.748119 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kv5g5" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.748368 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.770665 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.831727 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvzn5\" (UniqueName: \"kubernetes.io/projected/5158e153-9918-4fce-8f2f-75a87b96562b-kube-api-access-bvzn5\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.832099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.832663 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-scripts\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.832818 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.832947 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.833122 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.833260 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-config\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: 
I0203 10:23:13.935257 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-scripts\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935331 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935378 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935470 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935496 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-config\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935530 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvzn5\" (UniqueName: \"kubernetes.io/projected/5158e153-9918-4fce-8f2f-75a87b96562b-kube-api-access-bvzn5\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:13 crc kubenswrapper[5010]: I0203 10:23:13.935552 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.022786 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.022830 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-scripts\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.023322 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5158e153-9918-4fce-8f2f-75a87b96562b-config\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.029638 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.030552 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvzn5\" (UniqueName: \"kubernetes.io/projected/5158e153-9918-4fce-8f2f-75a87b96562b-kube-api-access-bvzn5\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.039058 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.044121 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5158e153-9918-4fce-8f2f-75a87b96562b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5158e153-9918-4fce-8f2f-75a87b96562b\") " pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.129955 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 10:23:14 crc kubenswrapper[5010]: I0203 10:23:14.965064 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 10:23:15 crc kubenswrapper[5010]: I0203 10:23:15.475661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5158e153-9918-4fce-8f2f-75a87b96562b","Type":"ContainerStarted","Data":"f96432e5c31a342fc4cc5216702bc5bb620c8632cf8e19b7798bd166fe95782b"} Feb 03 10:23:15 crc kubenswrapper[5010]: I0203 10:23:15.647383 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:23:16 crc kubenswrapper[5010]: I0203 10:23:16.462737 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 03 10:23:17 crc kubenswrapper[5010]: I0203 10:23:17.495658 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" event={"ID":"794d29fd-0784-4f8c-8f62-e6753d046def","Type":"ContainerStarted","Data":"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee"} Feb 03 10:23:18 crc kubenswrapper[5010]: I0203 10:23:18.515769 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-krnr5" event={"ID":"b2780eb3-7b7a-47fe-bda0-2605419df774","Type":"ContainerStarted","Data":"9d2da86adba088a1f50922b1889760440b7d60b277c4fb7e78d3c58c7765ecc8"} Feb 03 10:23:19 crc kubenswrapper[5010]: I0203 10:23:19.538262 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-krnr5" event={"ID":"b2780eb3-7b7a-47fe-bda0-2605419df774","Type":"ContainerStarted","Data":"81b70fba5bf3aa691c2b035a1c743357c9f13960700d426f13526599108aa833"} Feb 03 10:23:19 crc kubenswrapper[5010]: I0203 10:23:19.538754 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:19 crc kubenswrapper[5010]: I0203 10:23:19.565983 5010 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovn-controller-ovs-krnr5" podStartSLOduration=18.284626586999998 podStartE2EDuration="43.565961832s" podCreationTimestamp="2026-02-03 10:22:36 +0000 UTC" firstStartedPulling="2026-02-03 10:22:46.637961741 +0000 UTC m=+1236.793937860" lastFinishedPulling="2026-02-03 10:23:11.919296976 +0000 UTC m=+1262.075273105" observedRunningTime="2026-02-03 10:23:19.562920054 +0000 UTC m=+1269.718896193" watchObservedRunningTime="2026-02-03 10:23:19.565961832 +0000 UTC m=+1269.721937961" Feb 03 10:23:19 crc kubenswrapper[5010]: I0203 10:23:19.586139 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" podStartSLOduration=10.586116852 podStartE2EDuration="10.586116852s" podCreationTimestamp="2026-02-03 10:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:19.578051934 +0000 UTC m=+1269.734028063" watchObservedRunningTime="2026-02-03 10:23:19.586116852 +0000 UTC m=+1269.742092991" Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.548796 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5158e153-9918-4fce-8f2f-75a87b96562b","Type":"ContainerStarted","Data":"f0d988c7c6bfff8238bc3a032a99f23a54a66a18d264bfaa3fc707cba7ce94d0"} Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.549618 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.549638 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5158e153-9918-4fce-8f2f-75a87b96562b","Type":"ContainerStarted","Data":"cf5c0a250f0c0d83ecc5d50cfadc68dcae7bff5a6f435d8df4e66a85d0d0825b"} Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.549654 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-krnr5" Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.549797 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 03 10:23:20 crc kubenswrapper[5010]: I0203 10:23:20.568041 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.920256399 podStartE2EDuration="7.568026485s" podCreationTimestamp="2026-02-03 10:23:13 +0000 UTC" firstStartedPulling="2026-02-03 10:23:14.981763955 +0000 UTC m=+1265.137740084" lastFinishedPulling="2026-02-03 10:23:19.629534031 +0000 UTC m=+1269.785510170" observedRunningTime="2026-02-03 10:23:20.56475233 +0000 UTC m=+1270.720728459" watchObservedRunningTime="2026-02-03 10:23:20.568026485 +0000 UTC m=+1270.724002614" Feb 03 10:23:21 crc kubenswrapper[5010]: I0203 10:23:21.556542 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ql6ht" event={"ID":"1883c30e-4c38-468d-a5dc-91b07f167d67","Type":"ContainerStarted","Data":"5061f0de98754bb7f6cbb3fd8c116e1df2bc405232c5873037db4f0594aacf56"} Feb 03 10:23:21 crc kubenswrapper[5010]: I0203 10:23:21.557150 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ql6ht" Feb 03 10:23:21 crc kubenswrapper[5010]: I0203 10:23:21.578715 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ql6ht" podStartSLOduration=2.407311167 podStartE2EDuration="45.578698379s" 
podCreationTimestamp="2026-02-03 10:22:36 +0000 UTC" firstStartedPulling="2026-02-03 10:22:37.855019019 +0000 UTC m=+1228.010995148" lastFinishedPulling="2026-02-03 10:23:21.026406231 +0000 UTC m=+1271.182382360" observedRunningTime="2026-02-03 10:23:21.576453601 +0000 UTC m=+1271.732429740" watchObservedRunningTime="2026-02-03 10:23:21.578698379 +0000 UTC m=+1271.734674518" Feb 03 10:23:22 crc kubenswrapper[5010]: I0203 10:23:22.565858 5010 generic.go:334] "Generic (PLEG): container finished" podID="87eb5dd8-7171-457a-8a95-eda98893319a" containerID="836d10e13031e1e589fd13f0fcda7b9cdf717cf593196a23ab06f2b0deb83c45" exitCode=0 Feb 03 10:23:22 crc kubenswrapper[5010]: I0203 10:23:22.565931 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"87eb5dd8-7171-457a-8a95-eda98893319a","Type":"ContainerDied","Data":"836d10e13031e1e589fd13f0fcda7b9cdf717cf593196a23ab06f2b0deb83c45"} Feb 03 10:23:22 crc kubenswrapper[5010]: I0203 10:23:22.568202 5010 generic.go:334] "Generic (PLEG): container finished" podID="449f0b91-9186-4a16-b1b4-7f199b57a428" containerID="d59aa2650c8950a2a4ba7a7dc97b6834e3c2613e89263135debc00e0122c70c1" exitCode=0 Feb 03 10:23:22 crc kubenswrapper[5010]: I0203 10:23:22.568469 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"449f0b91-9186-4a16-b1b4-7f199b57a428","Type":"ContainerDied","Data":"d59aa2650c8950a2a4ba7a7dc97b6834e3c2613e89263135debc00e0122c70c1"} Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.578430 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"449f0b91-9186-4a16-b1b4-7f199b57a428","Type":"ContainerStarted","Data":"33659d6b8b33f4fa83e7ee3b9ea84cc7b2b68df78d0c3e36845ed0496dbd20ef"} Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.579968 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"87eb5dd8-7171-457a-8a95-eda98893319a","Type":"ContainerStarted","Data":"a167c7205f060547e741424f98d3ed27bfb2810e1d67d3b95c95dc0aa8fcf4d7"} Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.601743 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=16.070981453 podStartE2EDuration="56.601726732s" podCreationTimestamp="2026-02-03 10:22:27 +0000 UTC" firstStartedPulling="2026-02-03 10:22:30.901557893 +0000 UTC m=+1221.057534022" lastFinishedPulling="2026-02-03 10:23:11.432303172 +0000 UTC m=+1261.588279301" observedRunningTime="2026-02-03 10:23:23.599597107 +0000 UTC m=+1273.755573276" watchObservedRunningTime="2026-02-03 10:23:23.601726732 +0000 UTC m=+1273.757702861" Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.625697 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.199340962 podStartE2EDuration="54.625673559s" podCreationTimestamp="2026-02-03 10:22:29 +0000 UTC" firstStartedPulling="2026-02-03 10:22:32.619633455 +0000 UTC m=+1222.775609584" lastFinishedPulling="2026-02-03 10:23:11.045966052 +0000 UTC m=+1261.201942181" observedRunningTime="2026-02-03 10:23:23.624044327 +0000 UTC m=+1273.780020486" watchObservedRunningTime="2026-02-03 10:23:23.625673559 +0000 UTC m=+1273.781649728" Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.847686 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"] Feb 03 10:23:23 crc 
kubenswrapper[5010]: I0203 10:23:23.848026 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="dnsmasq-dns" containerID="cri-o://a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee" gracePeriod=10 Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.849382 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.894356 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"] Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.896336 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:23 crc kubenswrapper[5010]: I0203 10:23:23.904520 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"] Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.050758 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5td8\" (UniqueName: \"kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.050832 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.050862 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.050984 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.051066 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.118662 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.155407 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5td8\" (UniqueName: \"kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: 
\"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.155474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.155498 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.156446 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.156504 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.156636 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.156637 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.157699 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.157737 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.175669 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5td8\" (UniqueName: \"kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8\") pod \"dnsmasq-dns-698758b865-c5kgf\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 
10:23:24.310715 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.420665 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.562664 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb\") pod \"794d29fd-0784-4f8c-8f62-e6753d046def\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.562779 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config\") pod \"794d29fd-0784-4f8c-8f62-e6753d046def\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.562806 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb\") pod \"794d29fd-0784-4f8c-8f62-e6753d046def\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.562911 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc\") pod \"794d29fd-0784-4f8c-8f62-e6753d046def\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.562996 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf9dm\" (UniqueName: \"kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm\") pod \"794d29fd-0784-4f8c-8f62-e6753d046def\" (UID: \"794d29fd-0784-4f8c-8f62-e6753d046def\") " Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.569425 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm" (OuterVolumeSpecName: "kube-api-access-hf9dm") pod "794d29fd-0784-4f8c-8f62-e6753d046def" (UID: "794d29fd-0784-4f8c-8f62-e6753d046def"). InnerVolumeSpecName "kube-api-access-hf9dm". 
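The DELETE for dnsmasq-dns-86db49b7ff-bsmfs above is followed by kuberuntime_container.go:808 "Killing container with a grace period" with gracePeriod=10: the runtime delivers SIGTERM and escalates to SIGKILL only if the container is still alive when the grace period lapses. A minimal, self-contained sketch of that stop pattern, using a local process as a stand-in for the container (illustrative only, not kubelet source):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithGrace mimics the grace-period kill in the log: deliver SIGTERM,
    // then SIGKILL only if the process outlives the grace period.
    // Illustrative stand-in, not the kubelet's actual implementation.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // escalation, mirroring grace-period expiry
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60") // stands in for the dnsmasq-dns container
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println(stopWithGrace(cmd, 10*time.Second)) // gracePeriod=10, as logged
    }
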
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.590508 5010 generic.go:334] "Generic (PLEG): container finished" podID="794d29fd-0784-4f8c-8f62-e6753d046def" containerID="a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee" exitCode=0 Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.590556 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" event={"ID":"794d29fd-0784-4f8c-8f62-e6753d046def","Type":"ContainerDied","Data":"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee"} Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.590608 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" event={"ID":"794d29fd-0784-4f8c-8f62-e6753d046def","Type":"ContainerDied","Data":"098a23dc68ddad3e911c76c8c4f89d48833726d8e974b792fb670b88ee30cae7"} Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.590629 5010 scope.go:117] "RemoveContainer" containerID="a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.590681 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bsmfs" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.607485 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "794d29fd-0784-4f8c-8f62-e6753d046def" (UID: "794d29fd-0784-4f8c-8f62-e6753d046def"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.609768 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "794d29fd-0784-4f8c-8f62-e6753d046def" (UID: "794d29fd-0784-4f8c-8f62-e6753d046def"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.610499 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config" (OuterVolumeSpecName: "config") pod "794d29fd-0784-4f8c-8f62-e6753d046def" (UID: "794d29fd-0784-4f8c-8f62-e6753d046def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.616615 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "794d29fd-0784-4f8c-8f62-e6753d046def" (UID: "794d29fd-0784-4f8c-8f62-e6753d046def"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.623070 5010 scope.go:117] "RemoveContainer" containerID="ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.643644 5010 scope.go:117] "RemoveContainer" containerID="a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee" Feb 03 10:23:24 crc kubenswrapper[5010]: E0203 10:23:24.645249 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee\": container with ID starting with a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee not found: ID does not exist" containerID="a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.645306 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee"} err="failed to get container status \"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee\": rpc error: code = NotFound desc = could not find container \"a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee\": container with ID starting with a595523ad518ad011fa5338dd79d517e4eef6c82eeb095bd67d521e13f2ea5ee not found: ID does not exist" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.645335 5010 scope.go:117] "RemoveContainer" containerID="ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b" Feb 03 10:23:24 crc kubenswrapper[5010]: E0203 10:23:24.645884 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b\": container with ID starting with ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b not found: ID does not exist" containerID="ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.645907 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b"} err="failed to get container status \"ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b\": rpc error: code = NotFound desc = could not find container \"ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b\": container with ID starting with ad60373bd6b641bdb33a2fff90fcd46aff2b4465391eb58f2ee5896ab0a4f83b not found: ID does not exist" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.664755 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf9dm\" (UniqueName: \"kubernetes.io/projected/794d29fd-0784-4f8c-8f62-e6753d046def-kube-api-access-hf9dm\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.664794 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.664810 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:24 crc kubenswrapper[5010]: 
I0203 10:23:24.664822 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.664832 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/794d29fd-0784-4f8c-8f62-e6753d046def-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.765723 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"] Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.930436 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"] Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.936521 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bsmfs"] Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.975850 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 03 10:23:24 crc kubenswrapper[5010]: E0203 10:23:24.976179 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="init" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.976194 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="init" Feb 03 10:23:24 crc kubenswrapper[5010]: E0203 10:23:24.980044 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="dnsmasq-dns" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.980077 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="dnsmasq-dns" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.980438 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" containerName="dnsmasq-dns" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.987477 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.990831 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-z59t4" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.991019 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.991127 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.991273 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 03 10:23:24 crc kubenswrapper[5010]: I0203 10:23:24.994196 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078492 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-lock\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078541 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078561 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078603 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-cache\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078634 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp84n\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-kube-api-access-wp84n\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.078662 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b58c504-f707-43fe-91ca-4328c58e998c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180475 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-cache\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180574 5010 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wp84n\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-kube-api-access-wp84n\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180646 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b58c504-f707-43fe-91ca-4328c58e998c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180763 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-lock\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180809 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.180843 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.180986 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.181015 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.181066 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:25.681047205 +0000 UTC m=+1275.837023334 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.181352 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.181441 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-lock\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.181592 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b58c504-f707-43fe-91ca-4328c58e998c-cache\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.204517 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b58c504-f707-43fe-91ca-4328c58e998c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.207001 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp84n\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-kube-api-access-wp84n\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.238399 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.599669 5010 generic.go:334] "Generic (PLEG): container finished" podID="44cce4a6-14dd-4b2d-9473-49edee803476" containerID="3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf" exitCode=0 Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.599749 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-c5kgf" event={"ID":"44cce4a6-14dd-4b2d-9473-49edee803476","Type":"ContainerDied","Data":"3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf"} Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.599778 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-c5kgf" event={"ID":"44cce4a6-14dd-4b2d-9473-49edee803476","Type":"ContainerStarted","Data":"7b4cc9746175c611db5edf3a8b25a3610c6d4de7b21e5812358190938f2ecfc7"} Feb 03 10:23:25 crc kubenswrapper[5010]: I0203 10:23:25.690359 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: 
\"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.690912 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.691051 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 10:23:25 crc kubenswrapper[5010]: E0203 10:23:25.691269 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:26.691241718 +0000 UTC m=+1276.847217847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found Feb 03 10:23:26 crc kubenswrapper[5010]: I0203 10:23:26.511946 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="794d29fd-0784-4f8c-8f62-e6753d046def" path="/var/lib/kubelet/pods/794d29fd-0784-4f8c-8f62-e6753d046def/volumes" Feb 03 10:23:26 crc kubenswrapper[5010]: I0203 10:23:26.613287 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-c5kgf" event={"ID":"44cce4a6-14dd-4b2d-9473-49edee803476","Type":"ContainerStarted","Data":"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98"} Feb 03 10:23:26 crc kubenswrapper[5010]: I0203 10:23:26.613452 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:26 crc kubenswrapper[5010]: I0203 10:23:26.636900 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-c5kgf" podStartSLOduration=3.636879476 podStartE2EDuration="3.636879476s" podCreationTimestamp="2026-02-03 10:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:26.628389007 +0000 UTC m=+1276.784365156" watchObservedRunningTime="2026-02-03 10:23:26.636879476 +0000 UTC m=+1276.792855605" Feb 03 10:23:26 crc kubenswrapper[5010]: I0203 10:23:26.707581 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:26 crc kubenswrapper[5010]: E0203 10:23:26.707749 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 10:23:26 crc kubenswrapper[5010]: E0203 10:23:26.707782 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 10:23:26 crc kubenswrapper[5010]: E0203 10:23:26.707847 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:28.707825725 +0000 UTC m=+1278.863801914 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.740301 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:28 crc kubenswrapper[5010]: E0203 10:23:28.740483 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 10:23:28 crc kubenswrapper[5010]: E0203 10:23:28.740708 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 10:23:28 crc kubenswrapper[5010]: E0203 10:23:28.740759 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:32.740742482 +0000 UTC m=+1282.896718611 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.934927 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-n8qtn"] Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.936040 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.938097 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.938282 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.938492 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 03 10:23:28 crc kubenswrapper[5010]: I0203 10:23:28.946410 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n8qtn"] Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.045911 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.045964 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.046035 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.046068 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.046085 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.046312 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.046394 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7n9j\" (UniqueName: \"kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 
10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.147953 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148017 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148044 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148106 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148145 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7n9j\" (UniqueName: \"kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148184 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.148234 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.149108 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.149448 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.149856 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.153586 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.153734 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.160066 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.166628 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7n9j\" (UniqueName: \"kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j\") pod \"swift-ring-rebalance-n8qtn\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") " pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.291126 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n8qtn" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.562560 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.562852 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.656790 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.735018 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 03 10:23:29 crc kubenswrapper[5010]: I0203 10:23:29.752444 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n8qtn"] Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.642673 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8qtn" event={"ID":"65c9ffaf-83e3-47c1-a1e8-b097b371ccec","Type":"ContainerStarted","Data":"05528d7b25b91ddd2d6931ebb207234211817db001ec48df5c320eaf05808c38"} Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.850985 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-caa6-account-create-update-69sjp"] Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.853782 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.858488 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.875498 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-caa6-account-create-update-69sjp"] Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.960055 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nh655"] Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.961412 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nh655" Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.972458 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nh655"] Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.981068 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:30 crc kubenswrapper[5010]: I0203 10:23:30.981415 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bdc\" (UniqueName: \"kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.065818 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9qjk8"] Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.067022 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.076288 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9qjk8"] Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.082916 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4bdc\" (UniqueName: \"kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.083038 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.083101 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.083199 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.087059 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.119063 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4bdc\" (UniqueName: \"kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc\") pod \"keystone-caa6-account-create-update-69sjp\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.183232 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3037-account-create-update-847d2"] Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.184267 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.184591 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.186117 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.187635 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.187725 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.187811 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.187842 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxnpk\" (UniqueName: \"kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.188708 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.197822 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3037-account-create-update-847d2"] Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.200629 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.200715 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.206817 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c\") pod \"keystone-db-create-nh655\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.286152 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nh655" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.289682 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxnpk\" (UniqueName: \"kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.290597 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.290672 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn6zx\" (UniqueName: \"kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.290737 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.291154 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.307071 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxnpk\" (UniqueName: \"kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk\") pod \"placement-db-create-9qjk8\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.321091 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.392229 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn6zx\" (UniqueName: \"kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.392287 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 
crc kubenswrapper[5010]: I0203 10:23:31.393086 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.394631 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.408266 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn6zx\" (UniqueName: \"kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx\") pod \"placement-3037-account-create-update-847d2\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.688338 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:31 crc kubenswrapper[5010]: I0203 10:23:31.826680 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 03 10:23:32 crc kubenswrapper[5010]: I0203 10:23:32.170434 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nh655"] Feb 03 10:23:32 crc kubenswrapper[5010]: I0203 10:23:32.177937 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9qjk8"] Feb 03 10:23:32 crc kubenswrapper[5010]: I0203 10:23:32.345281 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-caa6-account-create-update-69sjp"] Feb 03 10:23:32 crc kubenswrapper[5010]: I0203 10:23:32.821474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0" Feb 03 10:23:32 crc kubenswrapper[5010]: E0203 10:23:32.821708 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 10:23:32 crc kubenswrapper[5010]: E0203 10:23:32.821737 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 10:23:32 crc kubenswrapper[5010]: E0203 10:23:32.821798 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:40.821780778 +0000 UTC m=+1290.977756907 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found Feb 03 10:23:33 crc kubenswrapper[5010]: W0203 10:23:33.917685 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cf6f6f7_d993_486c_9dcf_63d6b298f898.slice/crio-c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03 WatchSource:0}: Error finding container c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03: Status 404 returned error can't find the container with id c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03 Feb 03 10:23:33 crc kubenswrapper[5010]: W0203 10:23:33.919086 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a6faff8_cfd9_4253_8dc3_d3df2b3252be.slice/crio-d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0 WatchSource:0}: Error finding container d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0: Status 404 returned error can't find the container with id d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0 Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.266622 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.312580 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.326635 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3037-account-create-update-847d2"] Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.394511 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.394748 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="dnsmasq-dns" containerID="cri-o://2a39e93057d80e1a2e85ebc3a8a730552d12cf63e0e15cf7d8339a09d27bdab7" gracePeriod=10 Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.743610 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nh655" event={"ID":"7cf6f6f7-d993-486c-9dcf-63d6b298f898","Type":"ContainerStarted","Data":"867e48e65d90b62aadc6ddb63e004c04adf8450508e9b1413072265967186694"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.743661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nh655" event={"ID":"7cf6f6f7-d993-486c-9dcf-63d6b298f898","Type":"ContainerStarted","Data":"c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.748238 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8qtn" event={"ID":"65c9ffaf-83e3-47c1-a1e8-b097b371ccec","Type":"ContainerStarted","Data":"d6d0dcfaf8344c8474b2f870e0a3c246fba9c7b000a18b30741b2b813b8e10cd"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.750514 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-caa6-account-create-update-69sjp" 
event={"ID":"9a6faff8-cfd9-4253-8dc3-d3df2b3252be","Type":"ContainerStarted","Data":"e98e811059a9c2d02f4a30baf36100191798d1770e183f8268ccff78ece3d154"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.750562 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-caa6-account-create-update-69sjp" event={"ID":"9a6faff8-cfd9-4253-8dc3-d3df2b3252be","Type":"ContainerStarted","Data":"d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.753789 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3037-account-create-update-847d2" event={"ID":"9e03bfed-c1c6-4165-86c0-6c1415a30081","Type":"ContainerStarted","Data":"ecc37d219487243243570207ff635b3c963683b6d23c8e89c6a83dba41ce9ef2"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.753841 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3037-account-create-update-847d2" event={"ID":"9e03bfed-c1c6-4165-86c0-6c1415a30081","Type":"ContainerStarted","Data":"11adf87cd6aac6252b89e1d6e8378ed7df21239a139fae101d60fe883f287571"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.756065 5010 generic.go:334] "Generic (PLEG): container finished" podID="b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" containerID="783df9142821b00a27f64292c3e26d0dec1e72fe32175024883cc3eb71e60b8b" exitCode=0 Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.756138 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qjk8" event={"ID":"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996","Type":"ContainerDied","Data":"783df9142821b00a27f64292c3e26d0dec1e72fe32175024883cc3eb71e60b8b"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.756167 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qjk8" event={"ID":"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996","Type":"ContainerStarted","Data":"b2840ce53bec4b0c6c02ff134b8c0fd5257ca07c7ed9500554ece5a4a25bfa04"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.761639 5010 generic.go:334] "Generic (PLEG): container finished" podID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerID="2a39e93057d80e1a2e85ebc3a8a730552d12cf63e0e15cf7d8339a09d27bdab7" exitCode=0 Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.761703 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" event={"ID":"3ea6e430-f9a6-4850-b58e-24ac04fd49a2","Type":"ContainerDied","Data":"2a39e93057d80e1a2e85ebc3a8a730552d12cf63e0e15cf7d8339a09d27bdab7"} Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.779504 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-3037-account-create-update-847d2" podStartSLOduration=3.779486167 podStartE2EDuration="3.779486167s" podCreationTimestamp="2026-02-03 10:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:34.775858573 +0000 UTC m=+1284.931834712" watchObservedRunningTime="2026-02-03 10:23:34.779486167 +0000 UTC m=+1284.935462296" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.804413 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-n8qtn" podStartSLOduration=2.480469292 podStartE2EDuration="6.804394629s" podCreationTimestamp="2026-02-03 10:23:28 +0000 UTC" firstStartedPulling="2026-02-03 10:23:29.762477952 +0000 UTC m=+1279.918454081" 
lastFinishedPulling="2026-02-03 10:23:34.086403289 +0000 UTC m=+1284.242379418" observedRunningTime="2026-02-03 10:23:34.798162518 +0000 UTC m=+1284.954138657" watchObservedRunningTime="2026-02-03 10:23:34.804394629 +0000 UTC m=+1284.960370758" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.823758 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-caa6-account-create-update-69sjp" podStartSLOduration=4.823725787 podStartE2EDuration="4.823725787s" podCreationTimestamp="2026-02-03 10:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:34.821762617 +0000 UTC m=+1284.977738756" watchObservedRunningTime="2026-02-03 10:23:34.823725787 +0000 UTC m=+1284.979701916" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.893303 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.989965 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59czz\" (UniqueName: \"kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz\") pod \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.990180 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb\") pod \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.990244 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config\") pod \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.990281 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc\") pod \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\" (UID: \"3ea6e430-f9a6-4850-b58e-24ac04fd49a2\") " Feb 03 10:23:34 crc kubenswrapper[5010]: I0203 10:23:34.995563 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz" (OuterVolumeSpecName: "kube-api-access-59czz") pod "3ea6e430-f9a6-4850-b58e-24ac04fd49a2" (UID: "3ea6e430-f9a6-4850-b58e-24ac04fd49a2"). InnerVolumeSpecName "kube-api-access-59czz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.035128 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config" (OuterVolumeSpecName: "config") pod "3ea6e430-f9a6-4850-b58e-24ac04fd49a2" (UID: "3ea6e430-f9a6-4850-b58e-24ac04fd49a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.035978 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ea6e430-f9a6-4850-b58e-24ac04fd49a2" (UID: "3ea6e430-f9a6-4850-b58e-24ac04fd49a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.050501 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ea6e430-f9a6-4850-b58e-24ac04fd49a2" (UID: "3ea6e430-f9a6-4850-b58e-24ac04fd49a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.092723 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.092762 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.092774 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.092786 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59czz\" (UniqueName: \"kubernetes.io/projected/3ea6e430-f9a6-4850-b58e-24ac04fd49a2-kube-api-access-59czz\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.782634 5010 generic.go:334] "Generic (PLEG): container finished" podID="9e03bfed-c1c6-4165-86c0-6c1415a30081" containerID="ecc37d219487243243570207ff635b3c963683b6d23c8e89c6a83dba41ce9ef2" exitCode=0 Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.782731 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3037-account-create-update-847d2" event={"ID":"9e03bfed-c1c6-4165-86c0-6c1415a30081","Type":"ContainerDied","Data":"ecc37d219487243243570207ff635b3c963683b6d23c8e89c6a83dba41ce9ef2"} Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.785916 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.785908 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-84hts" event={"ID":"3ea6e430-f9a6-4850-b58e-24ac04fd49a2","Type":"ContainerDied","Data":"b237a98e3b61244f5b8cbba9933237b1c87653782e7c801f5d548e23ebd2e6d6"} Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.786069 5010 scope.go:117] "RemoveContainer" containerID="2a39e93057d80e1a2e85ebc3a8a730552d12cf63e0e15cf7d8339a09d27bdab7" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.787801 5010 generic.go:334] "Generic (PLEG): container finished" podID="7cf6f6f7-d993-486c-9dcf-63d6b298f898" containerID="867e48e65d90b62aadc6ddb63e004c04adf8450508e9b1413072265967186694" exitCode=0 Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.787869 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nh655" event={"ID":"7cf6f6f7-d993-486c-9dcf-63d6b298f898","Type":"ContainerDied","Data":"867e48e65d90b62aadc6ddb63e004c04adf8450508e9b1413072265967186694"} Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.790801 5010 generic.go:334] "Generic (PLEG): container finished" podID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" containerID="e98e811059a9c2d02f4a30baf36100191798d1770e183f8268ccff78ece3d154" exitCode=0 Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.791011 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-caa6-account-create-update-69sjp" event={"ID":"9a6faff8-cfd9-4253-8dc3-d3df2b3252be","Type":"ContainerDied","Data":"e98e811059a9c2d02f4a30baf36100191798d1770e183f8268ccff78ece3d154"} Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.815714 5010 scope.go:117] "RemoveContainer" containerID="dcafe9c15b252f4afce63db43717e61b273dee3af36eabf6852fd51f8f27c930" Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.858708 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:23:35 crc kubenswrapper[5010]: I0203 10:23:35.867473 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-84hts"] Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.197356 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nh655" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.286557 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.326738 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c\") pod \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.326976 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts\") pod \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\" (UID: \"7cf6f6f7-d993-486c-9dcf-63d6b298f898\") " Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.327758 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7cf6f6f7-d993-486c-9dcf-63d6b298f898" (UID: "7cf6f6f7-d993-486c-9dcf-63d6b298f898"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.332151 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c" (OuterVolumeSpecName: "kube-api-access-z462c") pod "7cf6f6f7-d993-486c-9dcf-63d6b298f898" (UID: "7cf6f6f7-d993-486c-9dcf-63d6b298f898"). InnerVolumeSpecName "kube-api-access-z462c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428116 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts\") pod \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428256 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxnpk\" (UniqueName: \"kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk\") pod \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\" (UID: \"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996\") " Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428595 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" (UID: "b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428722 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cf6f6f7-d993-486c-9dcf-63d6b298f898-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428741 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z462c\" (UniqueName: \"kubernetes.io/projected/7cf6f6f7-d993-486c-9dcf-63d6b298f898-kube-api-access-z462c\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.428751 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.431058 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk" (OuterVolumeSpecName: "kube-api-access-cxnpk") pod "b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" (UID: "b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996"). InnerVolumeSpecName "kube-api-access-cxnpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.510128 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" path="/var/lib/kubelet/pods/3ea6e430-f9a6-4850-b58e-24ac04fd49a2/volumes" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.529826 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxnpk\" (UniqueName: \"kubernetes.io/projected/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996-kube-api-access-cxnpk\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.620401 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-g8ncl"] Feb 03 10:23:36 crc kubenswrapper[5010]: E0203 10:23:36.620998 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="dnsmasq-dns" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621034 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="dnsmasq-dns" Feb 03 10:23:36 crc kubenswrapper[5010]: E0203 10:23:36.621059 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf6f6f7-d993-486c-9dcf-63d6b298f898" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621071 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf6f6f7-d993-486c-9dcf-63d6b298f898" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: E0203 10:23:36.621122 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="init" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621134 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="init" Feb 03 10:23:36 crc kubenswrapper[5010]: E0203 10:23:36.621164 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621175 5010 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621461 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea6e430-f9a6-4850-b58e-24ac04fd49a2" containerName="dnsmasq-dns" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621507 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf6f6f7-d993-486c-9dcf-63d6b298f898" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.621528 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" containerName="mariadb-database-create" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.622357 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.631326 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-g8ncl"] Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.706841 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-06a9-account-create-update-764vb"] Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.708529 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.710444 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.715411 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-06a9-account-create-update-764vb"] Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.734959 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jljbn\" (UniqueName: \"kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.735058 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.799478 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nh655" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.799469 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nh655" event={"ID":"7cf6f6f7-d993-486c-9dcf-63d6b298f898","Type":"ContainerDied","Data":"c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03"} Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.799619 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42abc7a8375b4b278fa745e2a4991ab20a2e2a586627dc3875627dcc3f98e03" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.800724 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qjk8" event={"ID":"b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996","Type":"ContainerDied","Data":"b2840ce53bec4b0c6c02ff134b8c0fd5257ca07c7ed9500554ece5a4a25bfa04"} Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.800757 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2840ce53bec4b0c6c02ff134b8c0fd5257ca07c7ed9500554ece5a4a25bfa04" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.800803 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9qjk8" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.837989 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jljbn\" (UniqueName: \"kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.838124 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.838323 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.838398 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pfxf\" (UniqueName: \"kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.839795 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.858197 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jljbn\" (UniqueName: 
\"kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn\") pod \"glance-db-create-g8ncl\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") " pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.938342 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g8ncl" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.939810 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.939860 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pfxf\" (UniqueName: \"kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.940619 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:36 crc kubenswrapper[5010]: I0203 10:23:36.959485 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pfxf\" (UniqueName: \"kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf\") pod \"glance-06a9-account-create-update-764vb\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") " pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.033801 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06a9-account-create-update-764vb" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.311704 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3037-account-create-update-847d2" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.319703 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-caa6-account-create-update-69sjp" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.346731 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4bdc\" (UniqueName: \"kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc\") pod \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.346863 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts\") pod \"9e03bfed-c1c6-4165-86c0-6c1415a30081\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.346930 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts\") pod \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\" (UID: \"9a6faff8-cfd9-4253-8dc3-d3df2b3252be\") " Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.346986 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn6zx\" (UniqueName: \"kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx\") pod \"9e03bfed-c1c6-4165-86c0-6c1415a30081\" (UID: \"9e03bfed-c1c6-4165-86c0-6c1415a30081\") " Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.347878 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a6faff8-cfd9-4253-8dc3-d3df2b3252be" (UID: "9a6faff8-cfd9-4253-8dc3-d3df2b3252be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.348384 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e03bfed-c1c6-4165-86c0-6c1415a30081" (UID: "9e03bfed-c1c6-4165-86c0-6c1415a30081"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.353053 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx" (OuterVolumeSpecName: "kube-api-access-zn6zx") pod "9e03bfed-c1c6-4165-86c0-6c1415a30081" (UID: "9e03bfed-c1c6-4165-86c0-6c1415a30081"). InnerVolumeSpecName "kube-api-access-zn6zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.354047 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc" (OuterVolumeSpecName: "kube-api-access-v4bdc") pod "9a6faff8-cfd9-4253-8dc3-d3df2b3252be" (UID: "9a6faff8-cfd9-4253-8dc3-d3df2b3252be"). InnerVolumeSpecName "kube-api-access-v4bdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.448528 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4bdc\" (UniqueName: \"kubernetes.io/projected/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-kube-api-access-v4bdc\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.448567 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e03bfed-c1c6-4165-86c0-6c1415a30081-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.448579 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6faff8-cfd9-4253-8dc3-d3df2b3252be-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.448589 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn6zx\" (UniqueName: \"kubernetes.io/projected/9e03bfed-c1c6-4165-86c0-6c1415a30081-kube-api-access-zn6zx\") on node \"crc\" DevicePath \"\"" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.532552 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-g8ncl"] Feb 03 10:23:37 crc kubenswrapper[5010]: W0203 10:23:37.538965 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0505d3aa_dab1_4f61_af12_69804ff1345a.slice/crio-25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f WatchSource:0}: Error finding container 25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f: Status 404 returned error can't find the container with id 25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.652570 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-06a9-account-create-update-764vb"] Feb 03 10:23:37 crc kubenswrapper[5010]: W0203 10:23:37.654939 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2d0be64_0307_43ee_9c2c_905f1d22c267.slice/crio-0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26 WatchSource:0}: Error finding container 0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26: Status 404 returned error can't find the container with id 0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26 Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763299 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nbvmd"] Feb 03 10:23:37 crc kubenswrapper[5010]: E0203 10:23:37.763635 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e03bfed-c1c6-4165-86c0-6c1415a30081" containerName="mariadb-account-create-update" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763653 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e03bfed-c1c6-4165-86c0-6c1415a30081" containerName="mariadb-account-create-update" Feb 03 10:23:37 crc kubenswrapper[5010]: E0203 10:23:37.763674 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" containerName="mariadb-account-create-update" Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763683 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" 
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.532552 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-g8ncl"]
Feb 03 10:23:37 crc kubenswrapper[5010]: W0203 10:23:37.538965 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0505d3aa_dab1_4f61_af12_69804ff1345a.slice/crio-25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f WatchSource:0}: Error finding container 25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f: Status 404 returned error can't find the container with id 25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.652570 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-06a9-account-create-update-764vb"]
Feb 03 10:23:37 crc kubenswrapper[5010]: W0203 10:23:37.654939 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2d0be64_0307_43ee_9c2c_905f1d22c267.slice/crio-0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26 WatchSource:0}: Error finding container 0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26: Status 404 returned error can't find the container with id 0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763299 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nbvmd"]
Feb 03 10:23:37 crc kubenswrapper[5010]: E0203 10:23:37.763635 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e03bfed-c1c6-4165-86c0-6c1415a30081" containerName="mariadb-account-create-update"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763653 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e03bfed-c1c6-4165-86c0-6c1415a30081" containerName="mariadb-account-create-update"
Feb 03 10:23:37 crc kubenswrapper[5010]: E0203 10:23:37.763674 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" containerName="mariadb-account-create-update"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763683 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" containerName="mariadb-account-create-update"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763866 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e03bfed-c1c6-4165-86c0-6c1415a30081" containerName="mariadb-account-create-update"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.763883 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" containerName="mariadb-account-create-update"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.764464 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nbvmd"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.766369 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.775869 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nbvmd"]
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.809120 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3037-account-create-update-847d2" event={"ID":"9e03bfed-c1c6-4165-86c0-6c1415a30081","Type":"ContainerDied","Data":"11adf87cd6aac6252b89e1d6e8378ed7df21239a139fae101d60fe883f287571"}
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.809171 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11adf87cd6aac6252b89e1d6e8378ed7df21239a139fae101d60fe883f287571"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.809175 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3037-account-create-update-847d2"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.810882 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g8ncl" event={"ID":"0505d3aa-dab1-4f61-af12-69804ff1345a","Type":"ContainerStarted","Data":"5e4e86c382f25cd8e9bad9e5d4a055df36fab11bdb33c4c29ebe01bd4ab0d270"}
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.810946 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g8ncl" event={"ID":"0505d3aa-dab1-4f61-af12-69804ff1345a","Type":"ContainerStarted","Data":"25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f"}
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.812053 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06a9-account-create-update-764vb" event={"ID":"e2d0be64-0307-43ee-9c2c-905f1d22c267","Type":"ContainerStarted","Data":"0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26"}
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.814649 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-caa6-account-create-update-69sjp" event={"ID":"9a6faff8-cfd9-4253-8dc3-d3df2b3252be","Type":"ContainerDied","Data":"d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0"}
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.814697 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7f00bf0640736d45f71cd118c2254b0787ce4238feabdee74b5da5d9ba600e0"
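
The paired cpu_manager/memory_manager lines are housekeeping: admitting root-account-create-update-nbvmd triggers RemoveStaleState, which drops CPU and memory assignments left behind by the account-create pods that just exited. The E (error) severity is cosmetic; nothing is failing. Roughly, with invented types:

    // Hedged sketch of a stale-state sweep; not the kubelet's real types.
    package main

    import "fmt"

    // podUID -> containerName -> resource assignment
    type state map[string]map[string]string

    func removeStaleState(s state, activePods map[string]bool) {
        for podUID, containers := range s {
            if activePods[podUID] {
                continue
            }
            for name := range containers {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
            }
            delete(s, podUID) // "Deleted CPUSet assignment"
        }
    }

    func main() {
        s := state{"9e03bfed-c1c6-4165-86c0-6c1415a30081": {"mariadb-account-create-update": "cpus 0-1"}}
        removeStaleState(s, map[string]bool{}) // the pod is no longer active
    }
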
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.814732 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-caa6-account-create-update-69sjp"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.838551 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-g8ncl" podStartSLOduration=1.8385316569999999 podStartE2EDuration="1.838531657s" podCreationTimestamp="2026-02-03 10:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:37.828111138 +0000 UTC m=+1287.984087267" watchObservedRunningTime="2026-02-03 10:23:37.838531657 +0000 UTC m=+1287.994507786"
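
The startup-latency line is self-consistent: podStartE2EDuration (1.838531657s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals it here because both pulling timestamps are Go's zero time ("0001-01-01 …"), i.e. no image pull was observed. Reproducing the arithmetic:

    // The SLO duration subtracts observed image-pull time from the
    // end-to-end duration; with zero-valued pull timestamps the two match.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2026-02-03T10:23:36Z")
        watchObserved, _ := time.Parse(time.RFC3339Nano, "2026-02-03T10:23:37.838531657Z")

        var firstStartedPulling, lastFinishedPulling time.Time // zero values, as logged

        e2e := watchObserved.Sub(created) // 1.838531657s
        slo := e2e
        if !firstStartedPulling.IsZero() {
            slo -= lastFinishedPulling.Sub(firstStartedPulling)
        }
        fmt.Println(e2e, slo)
    }
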
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.857582 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbx7k\" (UniqueName: \"kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.857696 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.959165 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbx7k\" (UniqueName: \"kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.959259 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.962903 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd"
Feb 03 10:23:37 crc kubenswrapper[5010]: I0203 10:23:37.976164 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbx7k\" (UniqueName: \"kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k\") pod \"root-account-create-update-nbvmd\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") " pod="openstack/root-account-create-update-nbvmd"
Feb 03 10:23:38 crc kubenswrapper[5010]: E0203 10:23:38.003361 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a6faff8_cfd9_4253_8dc3_d3df2b3252be.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0505d3aa_dab1_4f61_af12_69804ff1345a.slice/crio-5e4e86c382f25cd8e9bad9e5d4a055df36fab11bdb33c4c29ebe01bd4ab0d270.scope\": RecentStats: unable to find data in memory cache]"
Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.093094 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nbvmd"
Feb 03 10:23:38 crc kubenswrapper[5010]: W0203 10:23:38.511271 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55e89174_6261_4cf0_9d5a_a750c362b79a.slice/crio-359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1 WatchSource:0}: Error finding container 359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1: Status 404 returned error can't find the container with id 359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1
Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.518144 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nbvmd"]
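
Two benign races show up around these short-lived job containers. cadvisor's cgroup watcher can see a cgroup appear or vanish before CRI-O can answer for it, producing the "Status 404 … can't find the container" watch-event warnings, and its stats cache can briefly lack samples for brand-new or just-deleted cgroups, producing the RecentStats partial failures. The usual handling is to tolerate not-found and let the next housekeeping pass catch up; sketched with invented names:

    // Hedged sketch: treat "not found" as retryable noise, not a hard error.
    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("can't find the container")

    func containerInfo(cache map[string]string, id string) (string, error) {
        if info, ok := cache[id]; ok {
            return info, nil
        }
        return "", fmt.Errorf("container %s: %w", id, errNotFound)
    }

    func onWatchEvent(cache map[string]string, id string) {
        if _, err := containerInfo(cache, id); errors.Is(err, errNotFound) {
            fmt.Println("W Failed to process watch event:", err) // log and move on
            return
        }
    }

    func main() {
        onWatchEvent(map[string]string{}, "25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f")
    }
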
Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.826750 5010 generic.go:334] "Generic (PLEG): container finished" podID="e2d0be64-0307-43ee-9c2c-905f1d22c267" containerID="7faf76a4eb10f7d724f9bd83b1eb96f06a13d0bd092d0ededd050f56a18268b5" exitCode=0
Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.826824 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06a9-account-create-update-764vb" event={"ID":"e2d0be64-0307-43ee-9c2c-905f1d22c267","Type":"ContainerDied","Data":"7faf76a4eb10f7d724f9bd83b1eb96f06a13d0bd092d0ededd050f56a18268b5"}
Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.828692 5010 generic.go:334] "Generic (PLEG): container finished" podID="0505d3aa-dab1-4f61-af12-69804ff1345a" containerID="5e4e86c382f25cd8e9bad9e5d4a055df36fab11bdb33c4c29ebe01bd4ab0d270" exitCode=0
Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.828754 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g8ncl" event={"ID":"0505d3aa-dab1-4f61-af12-69804ff1345a","Type":"ContainerDied","Data":"5e4e86c382f25cd8e9bad9e5d4a055df36fab11bdb33c4c29ebe01bd4ab0d270"}
Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.830095 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nbvmd" event={"ID":"55e89174-6261-4cf0-9d5a-a750c362b79a","Type":"ContainerStarted","Data":"387dd9fd0160568ebec8f1a6d5d1c5088020bf051ddedc665506a7243fc7b05d"}
Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.830125 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nbvmd" event={"ID":"55e89174-6261-4cf0-9d5a-a750c362b79a","Type":"ContainerStarted","Data":"359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1"}
Feb 03 10:23:38 crc kubenswrapper[5010]: I0203 10:23:38.873547 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-nbvmd" podStartSLOduration=1.873531418 podStartE2EDuration="1.873531418s" podCreationTimestamp="2026-02-03 10:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:23:38.871937937 +0000 UTC m=+1289.027914076" watchObservedRunningTime="2026-02-03 10:23:38.873531418 +0000 UTC m=+1289.029507547"
Feb 03 10:23:39 crc kubenswrapper[5010]: I0203 10:23:39.838288 5010 generic.go:334] "Generic (PLEG): container finished" podID="55e89174-6261-4cf0-9d5a-a750c362b79a" containerID="387dd9fd0160568ebec8f1a6d5d1c5088020bf051ddedc665506a7243fc7b05d" exitCode=0
Feb 03 10:23:39 crc kubenswrapper[5010]: I0203 10:23:39.838372 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nbvmd" event={"ID":"55e89174-6261-4cf0-9d5a-a750c362b79a","Type":"ContainerDied","Data":"387dd9fd0160568ebec8f1a6d5d1c5088020bf051ddedc665506a7243fc7b05d"}
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.227629 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06a9-account-create-update-764vb"
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.304000 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts\") pod \"e2d0be64-0307-43ee-9c2c-905f1d22c267\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") "
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.304169 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pfxf\" (UniqueName: \"kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf\") pod \"e2d0be64-0307-43ee-9c2c-905f1d22c267\" (UID: \"e2d0be64-0307-43ee-9c2c-905f1d22c267\") "
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.305592 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2d0be64-0307-43ee-9c2c-905f1d22c267" (UID: "e2d0be64-0307-43ee-9c2c-905f1d22c267"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.307847 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g8ncl"
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.310620 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf" (OuterVolumeSpecName: "kube-api-access-8pfxf") pod "e2d0be64-0307-43ee-9c2c-905f1d22c267" (UID: "e2d0be64-0307-43ee-9c2c-905f1d22c267"). InnerVolumeSpecName "kube-api-access-8pfxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.405017 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jljbn\" (UniqueName: \"kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn\") pod \"0505d3aa-dab1-4f61-af12-69804ff1345a\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") "
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.405140 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts\") pod \"0505d3aa-dab1-4f61-af12-69804ff1345a\" (UID: \"0505d3aa-dab1-4f61-af12-69804ff1345a\") "
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.405668 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2d0be64-0307-43ee-9c2c-905f1d22c267-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.405701 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pfxf\" (UniqueName: \"kubernetes.io/projected/e2d0be64-0307-43ee-9c2c-905f1d22c267-kube-api-access-8pfxf\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.405664 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0505d3aa-dab1-4f61-af12-69804ff1345a" (UID: "0505d3aa-dab1-4f61-af12-69804ff1345a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.408325 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn" (OuterVolumeSpecName: "kube-api-access-jljbn") pod "0505d3aa-dab1-4f61-af12-69804ff1345a" (UID: "0505d3aa-dab1-4f61-af12-69804ff1345a"). InnerVolumeSpecName "kube-api-access-jljbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.507703 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jljbn\" (UniqueName: \"kubernetes.io/projected/0505d3aa-dab1-4f61-af12-69804ff1345a-kube-api-access-jljbn\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.507732 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0505d3aa-dab1-4f61-af12-69804ff1345a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.852303 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06a9-account-create-update-764vb"
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.852337 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06a9-account-create-update-764vb" event={"ID":"e2d0be64-0307-43ee-9c2c-905f1d22c267","Type":"ContainerDied","Data":"0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26"}
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.852383 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb74049b6a21c7420ab7d6a07b8da1c6271bb44e5f1f8d80798d530bdb69a26"
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.857957 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-g8ncl"
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.857954 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-g8ncl" event={"ID":"0505d3aa-dab1-4f61-af12-69804ff1345a","Type":"ContainerDied","Data":"25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f"}
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.858161 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25fd6088ea16981c55151b34eeff70789b343789ceffc247fb3df94f61510c7f"
Feb 03 10:23:40 crc kubenswrapper[5010]: I0203 10:23:40.914647 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0"
Feb 03 10:23:40 crc kubenswrapper[5010]: E0203 10:23:40.914876 5010 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 03 10:23:40 crc kubenswrapper[5010]: E0203 10:23:40.914906 5010 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 03 10:23:40 crc kubenswrapper[5010]: E0203 10:23:40.914976 5010 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift podName:4b58c504-f707-43fe-91ca-4328c58e998c nodeName:}" failed. No retries permitted until 2026-02-03 10:23:56.914955325 +0000 UTC m=+1307.070931454 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift") pod "swift-storage-0" (UID: "4b58c504-f707-43fe-91ca-4328c58e998c") : configmap "swift-ring-files" not found
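
swift-storage-0 is blocked on its etc-swift projected volume because the configmap swift-ring-files does not exist yet (it is produced by the swift-ring-rebalance job). nestedpendingoperations schedules the retry with exponential backoff, hence "durationBeforeRetry 16s" and the "No retries permitted until 10:23:56.914…" deadline; the mount indeed succeeds on the first attempt after that deadline (10:23:57, further down). A sketch of the doubling, with assumed initial and cap values rather than the kubelet's actual constants:

    // Hedged sketch of per-operation exponential backoff.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        backoff := 500 * time.Millisecond // assumed starting point
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d failed; next retry in %v\n", attempt, backoff)
            backoff *= 2
            if max := 2 * time.Minute; backoff > max {
                backoff = max
            }
        }
        // under these assumptions attempt 6 waits 16s, matching the log line
    }
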
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.234227 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nbvmd"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.319763 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbx7k\" (UniqueName: \"kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k\") pod \"55e89174-6261-4cf0-9d5a-a750c362b79a\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") "
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.320012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts\") pod \"55e89174-6261-4cf0-9d5a-a750c362b79a\" (UID: \"55e89174-6261-4cf0-9d5a-a750c362b79a\") "
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.320492 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55e89174-6261-4cf0-9d5a-a750c362b79a" (UID: "55e89174-6261-4cf0-9d5a-a750c362b79a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.324574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k" (OuterVolumeSpecName: "kube-api-access-sbx7k") pod "55e89174-6261-4cf0-9d5a-a750c362b79a" (UID: "55e89174-6261-4cf0-9d5a-a750c362b79a"). InnerVolumeSpecName "kube-api-access-sbx7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.424395 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55e89174-6261-4cf0-9d5a-a750c362b79a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.424437 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbx7k\" (UniqueName: \"kubernetes.io/projected/55e89174-6261-4cf0-9d5a-a750c362b79a-kube-api-access-sbx7k\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782119 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xlhhb"]
Feb 03 10:23:41 crc kubenswrapper[5010]: E0203 10:23:41.782448 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0505d3aa-dab1-4f61-af12-69804ff1345a" containerName="mariadb-database-create"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782464 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="0505d3aa-dab1-4f61-af12-69804ff1345a" containerName="mariadb-database-create"
Feb 03 10:23:41 crc kubenswrapper[5010]: E0203 10:23:41.782477 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d0be64-0307-43ee-9c2c-905f1d22c267" containerName="mariadb-account-create-update"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782484 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d0be64-0307-43ee-9c2c-905f1d22c267" containerName="mariadb-account-create-update"
Feb 03 10:23:41 crc kubenswrapper[5010]: E0203 10:23:41.782506 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e89174-6261-4cf0-9d5a-a750c362b79a" containerName="mariadb-account-create-update"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782512 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e89174-6261-4cf0-9d5a-a750c362b79a" containerName="mariadb-account-create-update"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782658 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d0be64-0307-43ee-9c2c-905f1d22c267" containerName="mariadb-account-create-update"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782669 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e89174-6261-4cf0-9d5a-a750c362b79a" containerName="mariadb-account-create-update"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.782691 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="0505d3aa-dab1-4f61-af12-69804ff1345a" containerName="mariadb-database-create"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.783158 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.786728 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.786832 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mtbjz"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.800759 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xlhhb"]
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.833225 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.833295 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.833333 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqxvx\" (UniqueName: \"kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.833482 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
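
Before mounting the secret-backed volumes for glance-db-sync-xlhhb, the kubelet starts a watch per referenced object and waits for the cache to sync; that is what the reflector "Caches populated for *v1.Secret" lines record. The equivalent pattern with client-go, assuming in-cluster credentials:

    // Hedged analogue of watch-then-mount: sync a Secret informer cache
    // before reading from it. Requires the k8s.io/client-go module.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        factory := informers.NewSharedInformerFactoryWithOptions(
            client, 10*time.Minute, informers.WithNamespace("openstack"))
        inf := factory.Core().V1().Secrets().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        if !cache.WaitForCacheSync(stop, inf.HasSynced) { // "Caches populated"
            panic("secret cache never synced")
        }
        fmt.Println("cache synced; volume contents can now be resolved locally")
    }
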
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.866418 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nbvmd" event={"ID":"55e89174-6261-4cf0-9d5a-a750c362b79a","Type":"ContainerDied","Data":"359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1"}
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.866457 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="359eb915fcc11a138878bf839336ac69436afb76a57b48e722008cd5e4965ce1"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.866512 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nbvmd"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.875140 5010 generic.go:334] "Generic (PLEG): container finished" podID="65c9ffaf-83e3-47c1-a1e8-b097b371ccec" containerID="d6d0dcfaf8344c8474b2f870e0a3c246fba9c7b000a18b30741b2b813b8e10cd" exitCode=0
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.875179 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8qtn" event={"ID":"65c9ffaf-83e3-47c1-a1e8-b097b371ccec","Type":"ContainerDied","Data":"d6d0dcfaf8344c8474b2f870e0a3c246fba9c7b000a18b30741b2b813b8e10cd"}
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.935524 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.935968 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.936143 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqxvx\" (UniqueName: \"kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.936385 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.940185 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.949124 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.953255 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:41 crc kubenswrapper[5010]: I0203 10:23:41.956985 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqxvx\" (UniqueName: \"kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx\") pod \"glance-db-sync-xlhhb\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:42 crc kubenswrapper[5010]: I0203 10:23:42.106533 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xlhhb"
Feb 03 10:23:42 crc kubenswrapper[5010]: I0203 10:23:42.605709 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xlhhb"]
Feb 03 10:23:42 crc kubenswrapper[5010]: I0203 10:23:42.881692 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xlhhb" event={"ID":"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3","Type":"ContainerStarted","Data":"46779b8951b31f9858ffd66ac6e32f691ea2a94f077b82226673a024b7efc699"}
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.197900 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n8qtn"
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.257780 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7n9j\" (UniqueName: \"kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") "
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.257840 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") "
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.257883 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") "
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.258008 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") "
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.258069 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") "
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.258098 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") "
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.258160 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts\") pod \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\" (UID: \"65c9ffaf-83e3-47c1-a1e8-b097b371ccec\") "
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.259071 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.260110 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.265270 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j" (OuterVolumeSpecName: "kube-api-access-c7n9j") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "kube-api-access-c7n9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.267044 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.280081 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts" (OuterVolumeSpecName: "scripts") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.281605 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.283438 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65c9ffaf-83e3-47c1-a1e8-b097b371ccec" (UID: "65c9ffaf-83e3-47c1-a1e8-b097b371ccec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.361592 5010 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362483 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362538 5010 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362554 5010 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362570 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362590 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7n9j\" (UniqueName: \"kubernetes.io/projected/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-kube-api-access-c7n9j\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.362608 5010 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65c9ffaf-83e3-47c1-a1e8-b097b371ccec-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.892694 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n8qtn" event={"ID":"65c9ffaf-83e3-47c1-a1e8-b097b371ccec","Type":"ContainerDied","Data":"05528d7b25b91ddd2d6931ebb207234211817db001ec48df5c320eaf05808c38"}
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.892737 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05528d7b25b91ddd2d6931ebb207234211817db001ec48df5c320eaf05808c38"
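
The swift-ring-rebalance teardown above spans four volume plugins, and TearDown means something different for each: empty-dir contents are simply removed, while secret, configmap, and projected volumes are tmpfs-style mounts whose contents came from the API and are unmounted. A toy dispatch over that difference (the implementations are invented):

    // Hedged sketch of per-plugin teardown dispatch.
    package main

    import "fmt"

    type volumePlugin interface {
        TearDown(path string) error
    }

    type emptyDir struct{}
    type apiBacked struct{ kind string } // secret / configmap / projected

    func (emptyDir) TearDown(path string) error {
        fmt.Println("removing disposable directory at", path)
        return nil
    }

    func (a apiBacked) TearDown(path string) error {
        fmt.Println("unmounting", a.kind, "volume at", path) // contents came from the API
        return nil
    }

    func main() {
        mounts := map[string]volumePlugin{
            "etc-swift":             emptyDir{},
            "swiftconf":             apiBacked{"secret"},
            "ring-data-devices":     apiBacked{"configmap"},
            "kube-api-access-c7n9j": apiBacked{"projected"},
        }
        for name, p := range mounts {
            _ = p.TearDown("/var/lib/kubelet/pods/65c9ffaf-83e3-47c1-a1e8-b097b371ccec/volumes/" + name)
        }
    }
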
Feb 03 10:23:43 crc kubenswrapper[5010]: I0203 10:23:43.892777 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n8qtn"
Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.753346 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nbvmd"]
Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.759799 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nbvmd"]
Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.904336 5010 generic.go:334] "Generic (PLEG): container finished" podID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerID="10e7a7e1923769d25869f1642046743d27038f14081a9edd79e0d2a9d1c7d095" exitCode=0
Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.904425 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerDied","Data":"10e7a7e1923769d25869f1642046743d27038f14081a9edd79e0d2a9d1c7d095"}
Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.911077 5010 generic.go:334] "Generic (PLEG): container finished" podID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerID="35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae" exitCode=0
Feb 03 10:23:44 crc kubenswrapper[5010]: I0203 10:23:44.911116 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerDied","Data":"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae"}
Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.921777 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerStarted","Data":"602c03e894fa88a9b33161b23751551ae10019029e054f5933d29cf4949f0620"}
Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.922312 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.924065 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerStarted","Data":"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89"}
Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.924686 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
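
The rabbitmq pods are mid-startup here: one container exits (ContainerDied), the long-running server container starts, and readiness is re-evaluated, with status "" meaning not yet known. The "connection refused" readiness failures at 10:23:58 further down just mean the AMQP listener on 5671 is not accepting connections yet; a TCP-style readiness check reduces to a dial:

    // Minimal TCP readiness check of the kind failing below.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func tcpReady(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, time.Second)
        if err != nil {
            return false // e.g. "connect: connection refused" while the broker boots
        }
        conn.Close()
        return true
    }

    func main() {
        fmt.Println(tcpReady("10.217.0.96:5671")) // false until rabbitmq listens
    }
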
Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.977141 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.387886458 podStartE2EDuration="1m19.977122243s" podCreationTimestamp="2026-02-03 10:22:26 +0000 UTC" firstStartedPulling="2026-02-03 10:22:29.023036027 +0000 UTC m=+1219.179012156" lastFinishedPulling="2026-02-03 10:23:10.612271802 +0000 UTC m=+1260.768247941" observedRunningTime="2026-02-03 10:23:45.948554847 +0000 UTC m=+1296.104530976" watchObservedRunningTime="2026-02-03 10:23:45.977122243 +0000 UTC m=+1296.133098362"
Feb 03 10:23:45 crc kubenswrapper[5010]: I0203 10:23:45.977286 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.752866187 podStartE2EDuration="1m19.977282597s" podCreationTimestamp="2026-02-03 10:22:26 +0000 UTC" firstStartedPulling="2026-02-03 10:22:30.38547769 +0000 UTC m=+1220.541453819" lastFinishedPulling="2026-02-03 10:23:10.6098941 +0000 UTC m=+1260.765870229" observedRunningTime="2026-02-03 10:23:45.974416924 +0000 UTC m=+1296.130393063" watchObservedRunningTime="2026-02-03 10:23:45.977282597 +0000 UTC m=+1296.133258726"
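
Unlike the jobs earlier, these two pods did pull images, so the two durations diverge: podStartSLOduration is the end-to-end duration minus the time spent pulling. For rabbitmq-cell1-server-0, 1m19.977282597s minus (10:23:10.6098941 − 10:22:30.38547769) gives 39.752866187s, exactly as logged. In Go terms:

    // Verifying the SLO arithmetic with the timestamps from the line above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, _ := time.Parse(time.RFC3339Nano, s)
            return t
        }
        created := parse("2026-02-03T10:22:26Z")
        firstPull := parse("2026-02-03T10:22:30.38547769Z")
        lastPull := parse("2026-02-03T10:23:10.6098941Z")
        watchObserved := parse("2026-02-03T10:23:45.977282597Z")

        e2e := watchObserved.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println(e2e, slo) // 1m19.977282597s 39.752866187s
    }
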
Feb 03 10:23:46 crc kubenswrapper[5010]: I0203 10:23:46.390472 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:23:46 crc kubenswrapper[5010]: I0203 10:23:46.390815 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:23:46 crc kubenswrapper[5010]: I0203 10:23:46.523949 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e89174-6261-4cf0-9d5a-a750c362b79a" path="/var/lib/kubelet/pods/55e89174-6261-4cf0-9d5a-a750c362b79a/volumes"
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.768653 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-742kg"]
Feb 03 10:23:49 crc kubenswrapper[5010]: E0203 10:23:49.769434 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c9ffaf-83e3-47c1-a1e8-b097b371ccec" containerName="swift-ring-rebalance"
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.769450 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c9ffaf-83e3-47c1-a1e8-b097b371ccec" containerName="swift-ring-rebalance"
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.769654 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c9ffaf-83e3-47c1-a1e8-b097b371ccec" containerName="swift-ring-rebalance"
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.770352 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-742kg"
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.772575 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.783723 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-742kg"]
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.916689 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg"
Feb 03 10:23:49 crc kubenswrapper[5010]: I0203 10:23:49.916840 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw5c5\" (UniqueName: \"kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg"
Feb 03 10:23:50 crc kubenswrapper[5010]: I0203 10:23:50.018788 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw5c5\" (UniqueName: \"kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg"
Feb 03 10:23:50 crc kubenswrapper[5010]: I0203 10:23:50.018983 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg"
Feb 03 10:23:50 crc kubenswrapper[5010]: I0203 10:23:50.019950 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg"
Feb 03 10:23:50 crc kubenswrapper[5010]: I0203 10:23:50.052633 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw5c5\" (UniqueName: \"kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5\") pod \"root-account-create-update-742kg\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " pod="openstack/root-account-create-update-742kg"
Feb 03 10:23:50 crc kubenswrapper[5010]: I0203 10:23:50.134360 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-742kg"
Feb 03 10:23:51 crc kubenswrapper[5010]: I0203 10:23:51.913160 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ql6ht" podUID="1883c30e-4c38-468d-a5dc-91b07f167d67" containerName="ovn-controller" probeResult="failure" output=<
Feb 03 10:23:51 crc kubenswrapper[5010]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 03 10:23:51 crc kubenswrapper[5010]: >
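
The ovn-controller readiness probe is an exec probe, so its failure output is free-form script output, which the journal renders as the multi-line output=< … > block above. An exec probe boils down to "run a command; exit 0 means ready". A sketch of that shape (the checked command is a stand-in, not taken from the log):

    // Hedged sketch of an exec-style readiness check.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func execReady(name string, args ...string) (bool, string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        return err == nil, string(out) // non-zero exit => not ready; output is captured
    }

    func main() {
        // stand-in command; the real probe script evidently compares the
        // connection status to 'connected', per the ERROR text above
        ready, out := execReady("/bin/sh", "-c",
            "test \"$(ovn-appctl -t ovn-controller connection-status)\" = connected")
        fmt.Println(ready, out)
    }
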
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.218554 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-krnr5"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.221510 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-krnr5"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.456666 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ql6ht-config-4w6d7"]
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.461910 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.469258 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.488410 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ql6ht-config-4w6d7"]
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.494939 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.495005 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.495100 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.495184 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.495222 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.495256 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj2cp\" (UniqueName: \"kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597095 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597165 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597207 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597362 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597389 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597425 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj2cp\" (UniqueName: \"kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597611 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597847 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.597958 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.598433 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.599633 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.618349 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj2cp\" (UniqueName: \"kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp\") pod \"ovn-controller-ql6ht-config-4w6d7\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:52 crc kubenswrapper[5010]: I0203 10:23:52.794624 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ql6ht-config-4w6d7"
Feb 03 10:23:56 crc kubenswrapper[5010]: I0203 10:23:56.980822 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ql6ht" podUID="1883c30e-4c38-468d-a5dc-91b07f167d67" containerName="ovn-controller" probeResult="failure" output=<
Feb 03 10:23:56 crc kubenswrapper[5010]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 03 10:23:56 crc kubenswrapper[5010]: >
Feb 03 10:23:57 crc kubenswrapper[5010]: I0203 10:23:57.025179 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0"
Feb 03 10:23:57 crc kubenswrapper[5010]: I0203 10:23:57.035454 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b58c504-f707-43fe-91ca-4328c58e998c-etc-swift\") pod \"swift-storage-0\" (UID: \"4b58c504-f707-43fe-91ca-4328c58e998c\") " pod="openstack/swift-storage-0"
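The etc-swift mount attempt at 10:23:57.025 is the first one permitted after the 10:23:56.914 deadline recorded when the mount failed earlier, and this time swift-ring-files exists, so SetUp succeeds and the swift-storage-0 sandbox can finally be created. The retry gate itself is just a wall-clock comparison; schematically, with invented field names:

    // Hedged sketch of the "No retries permitted until ..." gate.
    package main

    import (
        "fmt"
        "time"
    )

    type pendingOperation struct {
        notBefore time.Time // "No retries permitted until ..."
    }

    func (p pendingOperation) mayRetry(now time.Time) bool { return now.After(p.notBefore) }

    func main() {
        op := pendingOperation{notBefore: time.Date(2026, 2, 3, 10, 23, 56, 914955325, time.UTC)}
        retryAt := time.Date(2026, 2, 3, 10, 23, 57, 25179000, time.UTC)
        fmt.Println(op.mayRetry(retryAt)) // true: the 10:23:57.025 attempt is allowed
    }
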
Need to start a new one" pod="openstack/swift-storage-0" Feb 03 10:23:58 crc kubenswrapper[5010]: I0203 10:23:58.039555 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Feb 03 10:23:58 crc kubenswrapper[5010]: I0203 10:23:58.619065 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Feb 03 10:24:00 crc kubenswrapper[5010]: E0203 10:24:00.814908 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 03 10:24:00 crc kubenswrapper[5010]: E0203 10:24:00.815459 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqxvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-xlhhb_openstack(a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:24:00 crc kubenswrapper[5010]: E0203 10:24:00.817173 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/glance-db-sync-xlhhb" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" Feb 03 10:24:01 crc kubenswrapper[5010]: I0203 10:24:01.242866 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ql6ht-config-4w6d7"] Feb 03 10:24:01 crc kubenswrapper[5010]: I0203 10:24:01.320437 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-742kg"] Feb 03 10:24:01 crc kubenswrapper[5010]: W0203 10:24:01.337600 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0efd6c3_d0dc_4ebc_a116_d7e811177fa6.slice/crio-c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545 WatchSource:0}: Error finding container c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545: Status 404 returned error can't find the container with id c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545 Feb 03 10:24:01 crc kubenswrapper[5010]: I0203 10:24:01.399588 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ql6ht-config-4w6d7" event={"ID":"6401d284-126c-4b35-b668-35a8844eb9bb","Type":"ContainerStarted","Data":"1e83757b2e759c43060f4e53f21842ec4f1d15d13cbd2a72d2127f16f38ae78d"} Feb 03 10:24:01 crc kubenswrapper[5010]: E0203 10:24:01.401736 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-xlhhb" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" Feb 03 10:24:01 crc kubenswrapper[5010]: I0203 10:24:01.571763 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 03 10:24:01 crc kubenswrapper[5010]: W0203 10:24:01.581694 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b58c504_f707_43fe_91ca_4328c58e998c.slice/crio-0eaae01f4b96a18589a8eada604baf15f3cc9bacb179bf7002392b15b4613a7f WatchSource:0}: Error finding container 0eaae01f4b96a18589a8eada604baf15f3cc9bacb179bf7002392b15b4613a7f: Status 404 returned error can't find the container with id 0eaae01f4b96a18589a8eada604baf15f3cc9bacb179bf7002392b15b4613a7f Feb 03 10:24:01 crc kubenswrapper[5010]: I0203 10:24:01.905884 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ql6ht" Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.407879 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"0eaae01f4b96a18589a8eada604baf15f3cc9bacb179bf7002392b15b4613a7f"} Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.410021 5010 generic.go:334] "Generic (PLEG): container finished" podID="6401d284-126c-4b35-b668-35a8844eb9bb" containerID="ecc134dc06388d88bee9d6893b38c4e64f29d454add40ba84636bf94ef646d8a" exitCode=0 Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.410089 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ql6ht-config-4w6d7" event={"ID":"6401d284-126c-4b35-b668-35a8844eb9bb","Type":"ContainerDied","Data":"ecc134dc06388d88bee9d6893b38c4e64f29d454add40ba84636bf94ef646d8a"} Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.411725 5010 generic.go:334] "Generic (PLEG): container finished" 
podID="c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" containerID="b8b094bb4a4489910ae853a898b2603c46e5923639a21e30a68a2dca1eee68b8" exitCode=0 Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.411752 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-742kg" event={"ID":"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6","Type":"ContainerDied","Data":"b8b094bb4a4489910ae853a898b2603c46e5923639a21e30a68a2dca1eee68b8"} Feb 03 10:24:02 crc kubenswrapper[5010]: I0203 10:24:02.411766 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-742kg" event={"ID":"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6","Type":"ContainerStarted","Data":"c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545"} Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.429459 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"db2a74b8f45f6c7de60dfd387527274c06d19c2dc0ac62cded7d6ed861fef928"} Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.430731 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"7c5201313cc638d3fde80ddc4c91f16178d4855a4de7218c1565d0b1a6a13512"} Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.740196 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.834996 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj2cp\" (UniqueName: \"kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.835129 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.835175 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.835446 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.835598 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.835666 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts\") pod \"6401d284-126c-4b35-b668-35a8844eb9bb\" (UID: \"6401d284-126c-4b35-b668-35a8844eb9bb\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.837163 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.837715 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.837730 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run" (OuterVolumeSpecName: "var-run") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.837737 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.839492 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts" (OuterVolumeSpecName: "scripts") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.840104 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp" (OuterVolumeSpecName: "kube-api-access-mj2cp") pod "6401d284-126c-4b35-b668-35a8844eb9bb" (UID: "6401d284-126c-4b35-b668-35a8844eb9bb"). InnerVolumeSpecName "kube-api-access-mj2cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.848181 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-742kg" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937276 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw5c5\" (UniqueName: \"kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5\") pod \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937368 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts\") pod \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\" (UID: \"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6\") " Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937597 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj2cp\" (UniqueName: \"kubernetes.io/projected/6401d284-126c-4b35-b668-35a8844eb9bb-kube-api-access-mj2cp\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937622 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937636 5010 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937644 5010 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937654 5010 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6401d284-126c-4b35-b668-35a8844eb9bb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.937663 5010 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6401d284-126c-4b35-b668-35a8844eb9bb-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.939207 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" (UID: "c0efd6c3-d0dc-4ebc-a116-d7e811177fa6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:03 crc kubenswrapper[5010]: I0203 10:24:03.944246 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5" (OuterVolumeSpecName: "kube-api-access-nw5c5") pod "c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" (UID: "c0efd6c3-d0dc-4ebc-a116-d7e811177fa6"). InnerVolumeSpecName "kube-api-access-nw5c5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.039957 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw5c5\" (UniqueName: \"kubernetes.io/projected/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-kube-api-access-nw5c5\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.040486 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.444510 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ql6ht-config-4w6d7" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.444502 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ql6ht-config-4w6d7" event={"ID":"6401d284-126c-4b35-b668-35a8844eb9bb","Type":"ContainerDied","Data":"1e83757b2e759c43060f4e53f21842ec4f1d15d13cbd2a72d2127f16f38ae78d"} Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.444692 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e83757b2e759c43060f4e53f21842ec4f1d15d13cbd2a72d2127f16f38ae78d" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.446693 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-742kg" event={"ID":"c0efd6c3-d0dc-4ebc-a116-d7e811177fa6","Type":"ContainerDied","Data":"c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545"} Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.446716 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-742kg" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.446733 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c83431ad2e0e03f2949a3d629ee5e7c316fee3c8a2ec436126bdd8f80ca23545" Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.449411 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"d1c2a530c0466b671134916ca72597adfc90c967b55e06d9fba59851902ec967"} Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.449438 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"29effc9d44b4300198de5cfd88d55c8ad7bd542b084778e434bda412fc3f5c84"} Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.872619 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ql6ht-config-4w6d7"] Feb 03 10:24:04 crc kubenswrapper[5010]: I0203 10:24:04.892817 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ql6ht-config-4w6d7"] Feb 03 10:24:06 crc kubenswrapper[5010]: I0203 10:24:06.512833 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6401d284-126c-4b35-b668-35a8844eb9bb" path="/var/lib/kubelet/pods/6401d284-126c-4b35-b668-35a8844eb9bb/volumes" Feb 03 10:24:07 crc kubenswrapper[5010]: I0203 10:24:07.681351 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"0cffea7078f46c14a80ad94482e3f71482a844ae18e5b5ec841cd848d2fe8e71"} Feb 03 10:24:07 crc kubenswrapper[5010]: I0203 10:24:07.681642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"16443360c283954226183f93ac04429762280bd4c2147613462cb311b6496193"} Feb 03 10:24:07 crc kubenswrapper[5010]: I0203 10:24:07.681653 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"21414971b15e50b37e5ebd3f2bdf70d9842887d2857e5857563802e5f1a3f07f"} Feb 03 10:24:07 crc kubenswrapper[5010]: I0203 10:24:07.681661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"1dedc0697d0e6e0ad551f52949751ad019da7d427b25b62e39ae6b61b076e0b7"} Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.361623 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.621462 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.680266 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-54zjm"] Feb 03 10:24:08 crc kubenswrapper[5010]: E0203 10:24:08.680669 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6401d284-126c-4b35-b668-35a8844eb9bb" containerName="ovn-config" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.680692 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6401d284-126c-4b35-b668-35a8844eb9bb" containerName="ovn-config" Feb 03 10:24:08 crc kubenswrapper[5010]: E0203 10:24:08.680727 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" containerName="mariadb-account-create-update" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.680735 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" containerName="mariadb-account-create-update" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.680895 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" containerName="mariadb-account-create-update" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.680920 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="6401d284-126c-4b35-b668-35a8844eb9bb" containerName="ovn-config" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.681472 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.720353 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-54zjm"] Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.789148 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-z7nxm"] Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.790530 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.802403 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z7nxm"] Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.860872 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tmgg\" (UniqueName: \"kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.861017 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.861064 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.861106 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd6dp\" (UniqueName: \"kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.910277 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5b83-account-create-update-hrlzs"] Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.912015 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:08 crc kubenswrapper[5010]: I0203 10:24:08.919259 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.026299 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.026582 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd6dp\" (UniqueName: \"kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.027064 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tmgg\" (UniqueName: \"kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.027290 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.027732 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.029555 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.078522 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd6dp\" (UniqueName: \"kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp\") pod \"barbican-db-create-z7nxm\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.079993 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5b83-account-create-update-hrlzs"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.086004 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tmgg\" (UniqueName: \"kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg\") pod \"cinder-db-create-54zjm\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.118590 
5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.121982 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5fk6k"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.124527 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.129517 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.129613 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5p4x\" (UniqueName: \"kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.157455 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5fk6k"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.232276 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.232379 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5p4x\" (UniqueName: \"kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.232449 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.232504 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smzrg\" (UniqueName: \"kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.234813 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 
03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.242741 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f06e-account-create-update-glqr6"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.243900 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.256496 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.261910 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5p4x\" (UniqueName: \"kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x\") pod \"cinder-5b83-account-create-update-hrlzs\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.277610 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f06e-account-create-update-glqr6"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.325822 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.335853 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.335963 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smzrg\" (UniqueName: \"kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.336032 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpsd2\" (UniqueName: \"kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.336113 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.337375 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.364311 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smzrg\" (UniqueName: 
\"kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg\") pod \"neutron-db-create-5fk6k\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.437233 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpsd2\" (UniqueName: \"kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.437324 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.438377 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.544022 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.560796 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.605866 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpsd2\" (UniqueName: \"kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2\") pod \"barbican-f06e-account-create-update-glqr6\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.607312 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.655774 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5102-account-create-update-nv7jr"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.657118 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.667072 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.715198 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5102-account-create-update-nv7jr"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.754050 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-b8wjx"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.755153 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.755370 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnqgw\" (UniqueName: \"kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.755784 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.758997 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.759533 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.759810 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.762381 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xdhtt" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.807091 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-b8wjx"] Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.856864 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grp76\" (UniqueName: \"kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.856932 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.856964 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.857015 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnqgw\" (UniqueName: \"kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.857041 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.857965 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.958615 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.958762 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grp76\" (UniqueName: \"kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:09 crc kubenswrapper[5010]: I0203 10:24:09.958799 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.039371 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.041247 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnqgw\" (UniqueName: \"kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw\") pod \"neutron-5102-account-create-update-nv7jr\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.041251 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.050067 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grp76\" (UniqueName: \"kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76\") pod \"keystone-db-sync-b8wjx\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.119783 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.299133 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.449993 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-z7nxm"] Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.471291 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-54zjm"] Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.640994 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5fk6k"] Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.661591 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f06e-account-create-update-glqr6"] Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.668423 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5b83-account-create-update-hrlzs"] Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.816291 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 03 10:24:10 crc kubenswrapper[5010]: I0203 10:24:10.838316 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.220607 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-b8wjx"] Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.281005 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5102-account-create-update-nv7jr"] Feb 03 10:24:11 crc kubenswrapper[5010]: W0203 10:24:11.293512 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90501abd_ab27_4c54_bd38_239e5803689b.slice/crio-f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c WatchSource:0}: Error finding container f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c: Status 404 returned error can't find the container with id f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.331015 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.757007 5010 generic.go:334] "Generic (PLEG): container finished" podID="83561b9b-ec1d-4ef5-bb05-48780834e40d" containerID="175dd1c77e9a4d7de137280af274a9e26cedb6a12f8e491f927188b800875447" exitCode=0 Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.757109 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-5fk6k" event={"ID":"83561b9b-ec1d-4ef5-bb05-48780834e40d","Type":"ContainerDied","Data":"175dd1c77e9a4d7de137280af274a9e26cedb6a12f8e491f927188b800875447"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.757142 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5fk6k" event={"ID":"83561b9b-ec1d-4ef5-bb05-48780834e40d","Type":"ContainerStarted","Data":"af80050199b9095462b302599b666bc9450b38b9838b7bf7e684a30e30caf772"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.761164 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8wjx" event={"ID":"a81f0078-44e5-4bbc-82ce-3d648e2e32db","Type":"ContainerStarted","Data":"a7b60789589a796270441190392ade515cdcca0df1868691375db1fd1edbc5e5"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.783353 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"b836803d4779ec4a49b461a286a6d80e04b860b0b01da7dc4d4c40cfae68deeb"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.785708 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b83-account-create-update-hrlzs" event={"ID":"fce7685e-8301-4c02-8e1b-386646d84264","Type":"ContainerStarted","Data":"5fd86f16e791f88f37d27cd6030a471785bd1ebc82355253888f61f74084bc56"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.785758 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b83-account-create-update-hrlzs" event={"ID":"fce7685e-8301-4c02-8e1b-386646d84264","Type":"ContainerStarted","Data":"c13284cefec97b3f12efa97f85dd824080363feefcefccda188b362f41c20f43"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.789992 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f06e-account-create-update-glqr6" event={"ID":"8144e4b8-89a7-4c08-86b9-219ea9d4645c","Type":"ContainerStarted","Data":"ea0bf3943fa2c4dbc35b90869ad8099512a31ad225b933cd4437ed8cc1770bf0"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.790068 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f06e-account-create-update-glqr6" event={"ID":"8144e4b8-89a7-4c08-86b9-219ea9d4645c","Type":"ContainerStarted","Data":"37d7165e050b73a9b2db161747cacdeca84e8090a079b1b3dce24ed46e010bb6"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.792279 5010 generic.go:334] "Generic (PLEG): container finished" podID="9c0e1d98-9045-4a70-8021-ac7dcf843775" containerID="5168c22750de205db4c3cef2742987a3feeb1460c92bf43dadf92987bcb6f04e" exitCode=0 Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.792309 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-54zjm" event={"ID":"9c0e1d98-9045-4a70-8021-ac7dcf843775","Type":"ContainerDied","Data":"5168c22750de205db4c3cef2742987a3feeb1460c92bf43dadf92987bcb6f04e"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.792332 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-54zjm" event={"ID":"9c0e1d98-9045-4a70-8021-ac7dcf843775","Type":"ContainerStarted","Data":"6d47e0b433b06ac06d298258d90d3d0668c8ed77604ca3eaf431f6b0a84e592a"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.794499 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c5b7adb-c7e4-4014-b37f-674861868979" containerID="6a575e19d1e33cee77eb78ea1b934b59f477f565a39712db7cebceb61e00a60f" exitCode=0 Feb 03 
10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.794647 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z7nxm" event={"ID":"1c5b7adb-c7e4-4014-b37f-674861868979","Type":"ContainerDied","Data":"6a575e19d1e33cee77eb78ea1b934b59f477f565a39712db7cebceb61e00a60f"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.794680 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z7nxm" event={"ID":"1c5b7adb-c7e4-4014-b37f-674861868979","Type":"ContainerStarted","Data":"d25b0088e1755bc09fffce4f9f6579141c9392fb4e69275100ea890163ce1c0f"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.796130 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5102-account-create-update-nv7jr" event={"ID":"90501abd-ab27-4c54-bd38-239e5803689b","Type":"ContainerStarted","Data":"02a4a1176b9659935ba9d5084dc9f0a979b3bf3765756a868a98c381f2e4df2c"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.796157 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5102-account-create-update-nv7jr" event={"ID":"90501abd-ab27-4c54-bd38-239e5803689b","Type":"ContainerStarted","Data":"f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c"} Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.813542 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-5b83-account-create-update-hrlzs" podStartSLOduration=3.81351046 podStartE2EDuration="3.81351046s" podCreationTimestamp="2026-02-03 10:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:11.811737865 +0000 UTC m=+1321.967713994" watchObservedRunningTime="2026-02-03 10:24:11.81351046 +0000 UTC m=+1321.969486589" Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.831963 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5102-account-create-update-nv7jr" podStartSLOduration=2.831943515 podStartE2EDuration="2.831943515s" podCreationTimestamp="2026-02-03 10:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:11.826407783 +0000 UTC m=+1321.982383912" watchObservedRunningTime="2026-02-03 10:24:11.831943515 +0000 UTC m=+1321.987919644" Feb 03 10:24:11 crc kubenswrapper[5010]: I0203 10:24:11.899550 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-f06e-account-create-update-glqr6" podStartSLOduration=2.8995204980000002 podStartE2EDuration="2.899520498s" podCreationTimestamp="2026-02-03 10:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:11.87515808 +0000 UTC m=+1322.031134209" watchObservedRunningTime="2026-02-03 10:24:11.899520498 +0000 UTC m=+1322.055496627" Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.810722 5010 generic.go:334] "Generic (PLEG): container finished" podID="fce7685e-8301-4c02-8e1b-386646d84264" containerID="5fd86f16e791f88f37d27cd6030a471785bd1ebc82355253888f61f74084bc56" exitCode=0 Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.813070 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b83-account-create-update-hrlzs" 
event={"ID":"fce7685e-8301-4c02-8e1b-386646d84264","Type":"ContainerDied","Data":"5fd86f16e791f88f37d27cd6030a471785bd1ebc82355253888f61f74084bc56"} Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.815825 5010 generic.go:334] "Generic (PLEG): container finished" podID="8144e4b8-89a7-4c08-86b9-219ea9d4645c" containerID="ea0bf3943fa2c4dbc35b90869ad8099512a31ad225b933cd4437ed8cc1770bf0" exitCode=0 Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.815872 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f06e-account-create-update-glqr6" event={"ID":"8144e4b8-89a7-4c08-86b9-219ea9d4645c","Type":"ContainerDied","Data":"ea0bf3943fa2c4dbc35b90869ad8099512a31ad225b933cd4437ed8cc1770bf0"} Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.845464 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"d1ce747a50d5f46d5b3c16c92f2b4f8b9e4ff276e546b1973d05421fc0f0d97e"} Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.845521 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"b3d355cfd98d32954b37cf45219a4be1c32cf5b14c94e1a52df20c6c96e39cbd"} Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.845532 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"80f81f8a1df2e968b1e4c71e1d5878acda4986e765bee0232e1d5fee79af9d39"} Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.848163 5010 generic.go:334] "Generic (PLEG): container finished" podID="90501abd-ab27-4c54-bd38-239e5803689b" containerID="02a4a1176b9659935ba9d5084dc9f0a979b3bf3765756a868a98c381f2e4df2c" exitCode=0 Feb 03 10:24:12 crc kubenswrapper[5010]: I0203 10:24:12.848362 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5102-account-create-update-nv7jr" event={"ID":"90501abd-ab27-4c54-bd38-239e5803689b","Type":"ContainerDied","Data":"02a4a1176b9659935ba9d5084dc9f0a979b3bf3765756a868a98c381f2e4df2c"} Feb 03 10:24:13 crc kubenswrapper[5010]: I0203 10:24:13.908821 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"a932203b34d249fb2ffede1ec05b784720503da7b29b24c7ca9666515aa4cf12"} Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.822112 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.842446 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.845429 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.858472 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smzrg\" (UniqueName: \"kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg\") pod \"83561b9b-ec1d-4ef5-bb05-48780834e40d\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.858556 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts\") pod \"83561b9b-ec1d-4ef5-bb05-48780834e40d\" (UID: \"83561b9b-ec1d-4ef5-bb05-48780834e40d\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.859926 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83561b9b-ec1d-4ef5-bb05-48780834e40d" (UID: "83561b9b-ec1d-4ef5-bb05-48780834e40d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.874431 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg" (OuterVolumeSpecName: "kube-api-access-smzrg") pod "83561b9b-ec1d-4ef5-bb05-48780834e40d" (UID: "83561b9b-ec1d-4ef5-bb05-48780834e40d"). InnerVolumeSpecName "kube-api-access-smzrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.940081 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-z7nxm" event={"ID":"1c5b7adb-c7e4-4014-b37f-674861868979","Type":"ContainerDied","Data":"d25b0088e1755bc09fffce4f9f6579141c9392fb4e69275100ea890163ce1c0f"} Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.940454 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d25b0088e1755bc09fffce4f9f6579141c9392fb4e69275100ea890163ce1c0f" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.940560 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-z7nxm" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.952124 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.958642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"cf26953bc1f2dd09b88d82a0c5f1103a17f6a80dcfb8303aa71f48cf4e96c654"} Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960158 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts\") pod \"1c5b7adb-c7e4-4014-b37f-674861868979\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960245 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts\") pod \"9c0e1d98-9045-4a70-8021-ac7dcf843775\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960281 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tmgg\" (UniqueName: \"kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg\") pod \"9c0e1d98-9045-4a70-8021-ac7dcf843775\" (UID: \"9c0e1d98-9045-4a70-8021-ac7dcf843775\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960503 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd6dp\" (UniqueName: \"kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp\") pod \"1c5b7adb-c7e4-4014-b37f-674861868979\" (UID: \"1c5b7adb-c7e4-4014-b37f-674861868979\") " Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960938 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83561b9b-ec1d-4ef5-bb05-48780834e40d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.960967 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smzrg\" (UniqueName: \"kubernetes.io/projected/83561b9b-ec1d-4ef5-bb05-48780834e40d-kube-api-access-smzrg\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.961018 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c0e1d98-9045-4a70-8021-ac7dcf843775" (UID: "9c0e1d98-9045-4a70-8021-ac7dcf843775"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.961634 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c5b7adb-c7e4-4014-b37f-674861868979" (UID: "1c5b7adb-c7e4-4014-b37f-674861868979"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.963168 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5fk6k" event={"ID":"83561b9b-ec1d-4ef5-bb05-48780834e40d","Type":"ContainerDied","Data":"af80050199b9095462b302599b666bc9450b38b9838b7bf7e684a30e30caf772"} Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.963224 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af80050199b9095462b302599b666bc9450b38b9838b7bf7e684a30e30caf772" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.963230 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5fk6k" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.973910 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp" (OuterVolumeSpecName: "kube-api-access-hd6dp") pod "1c5b7adb-c7e4-4014-b37f-674861868979" (UID: "1c5b7adb-c7e4-4014-b37f-674861868979"). InnerVolumeSpecName "kube-api-access-hd6dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.976866 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-54zjm" event={"ID":"9c0e1d98-9045-4a70-8021-ac7dcf843775","Type":"ContainerDied","Data":"6d47e0b433b06ac06d298258d90d3d0668c8ed77604ca3eaf431f6b0a84e592a"} Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.976914 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d47e0b433b06ac06d298258d90d3d0668c8ed77604ca3eaf431f6b0a84e592a" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.976989 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-54zjm" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.980461 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg" (OuterVolumeSpecName: "kube-api-access-8tmgg") pod "9c0e1d98-9045-4a70-8021-ac7dcf843775" (UID: "9c0e1d98-9045-4a70-8021-ac7dcf843775"). InnerVolumeSpecName "kube-api-access-8tmgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.984964 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:14 crc kubenswrapper[5010]: I0203 10:24:14.999471 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062157 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts\") pod \"fce7685e-8301-4c02-8e1b-386646d84264\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062409 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts\") pod \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062467 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts\") pod \"90501abd-ab27-4c54-bd38-239e5803689b\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062601 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5p4x\" (UniqueName: \"kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x\") pod \"fce7685e-8301-4c02-8e1b-386646d84264\" (UID: \"fce7685e-8301-4c02-8e1b-386646d84264\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062671 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpsd2\" (UniqueName: \"kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2\") pod \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\" (UID: \"8144e4b8-89a7-4c08-86b9-219ea9d4645c\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.062753 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnqgw\" (UniqueName: \"kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw\") pod \"90501abd-ab27-4c54-bd38-239e5803689b\" (UID: \"90501abd-ab27-4c54-bd38-239e5803689b\") " Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063193 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd6dp\" (UniqueName: \"kubernetes.io/projected/1c5b7adb-c7e4-4014-b37f-674861868979-kube-api-access-hd6dp\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063231 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c5b7adb-c7e4-4014-b37f-674861868979-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063245 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c0e1d98-9045-4a70-8021-ac7dcf843775-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063256 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tmgg\" (UniqueName: \"kubernetes.io/projected/9c0e1d98-9045-4a70-8021-ac7dcf843775-kube-api-access-8tmgg\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063328 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90501abd-ab27-4c54-bd38-239e5803689b" (UID: "90501abd-ab27-4c54-bd38-239e5803689b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.063947 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8144e4b8-89a7-4c08-86b9-219ea9d4645c" (UID: "8144e4b8-89a7-4c08-86b9-219ea9d4645c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.064462 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fce7685e-8301-4c02-8e1b-386646d84264" (UID: "fce7685e-8301-4c02-8e1b-386646d84264"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.072519 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2" (OuterVolumeSpecName: "kube-api-access-tpsd2") pod "8144e4b8-89a7-4c08-86b9-219ea9d4645c" (UID: "8144e4b8-89a7-4c08-86b9-219ea9d4645c"). InnerVolumeSpecName "kube-api-access-tpsd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.072605 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x" (OuterVolumeSpecName: "kube-api-access-m5p4x") pod "fce7685e-8301-4c02-8e1b-386646d84264" (UID: "fce7685e-8301-4c02-8e1b-386646d84264"). InnerVolumeSpecName "kube-api-access-m5p4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.072693 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw" (OuterVolumeSpecName: "kube-api-access-xnqgw") pod "90501abd-ab27-4c54-bd38-239e5803689b" (UID: "90501abd-ab27-4c54-bd38-239e5803689b"). InnerVolumeSpecName "kube-api-access-xnqgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164856 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnqgw\" (UniqueName: \"kubernetes.io/projected/90501abd-ab27-4c54-bd38-239e5803689b-kube-api-access-xnqgw\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164901 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fce7685e-8301-4c02-8e1b-386646d84264-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164914 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8144e4b8-89a7-4c08-86b9-219ea9d4645c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164925 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90501abd-ab27-4c54-bd38-239e5803689b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164934 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5p4x\" (UniqueName: \"kubernetes.io/projected/fce7685e-8301-4c02-8e1b-386646d84264-kube-api-access-m5p4x\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:15 crc kubenswrapper[5010]: I0203 10:24:15.164945 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpsd2\" (UniqueName: \"kubernetes.io/projected/8144e4b8-89a7-4c08-86b9-219ea9d4645c-kube-api-access-tpsd2\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.017570 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f06e-account-create-update-glqr6" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.017558 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f06e-account-create-update-glqr6" event={"ID":"8144e4b8-89a7-4c08-86b9-219ea9d4645c","Type":"ContainerDied","Data":"37d7165e050b73a9b2db161747cacdeca84e8090a079b1b3dce24ed46e010bb6"} Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.018247 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37d7165e050b73a9b2db161747cacdeca84e8090a079b1b3dce24ed46e010bb6" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.061849 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b58c504-f707-43fe-91ca-4328c58e998c","Type":"ContainerStarted","Data":"2af5bfc5fbc2a5eac12a440d02e92d50b536160904f5367797a5fcf2fcc9b3bc"} Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.067555 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5102-account-create-update-nv7jr" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.067570 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5102-account-create-update-nv7jr" event={"ID":"90501abd-ab27-4c54-bd38-239e5803689b","Type":"ContainerDied","Data":"f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c"} Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.067621 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8485215bb4e69bf51b493e589891df592e5976041e72593d4f67139fa1b872c" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.070490 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5b83-account-create-update-hrlzs" event={"ID":"fce7685e-8301-4c02-8e1b-386646d84264","Type":"ContainerDied","Data":"c13284cefec97b3f12efa97f85dd824080363feefcefccda188b362f41c20f43"} Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.070539 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c13284cefec97b3f12efa97f85dd824080363feefcefccda188b362f41c20f43" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.070585 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5b83-account-create-update-hrlzs" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.102083 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.800691703 podStartE2EDuration="53.102058186s" podCreationTimestamp="2026-02-03 10:23:23 +0000 UTC" firstStartedPulling="2026-02-03 10:24:01.585139949 +0000 UTC m=+1311.741116078" lastFinishedPulling="2026-02-03 10:24:10.886506432 +0000 UTC m=+1321.042482561" observedRunningTime="2026-02-03 10:24:16.097568101 +0000 UTC m=+1326.253544240" watchObservedRunningTime="2026-02-03 10:24:16.102058186 +0000 UTC m=+1326.258034315" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.392520 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.392582 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.449674 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450179 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90501abd-ab27-4c54-bd38-239e5803689b" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450209 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="90501abd-ab27-4c54-bd38-239e5803689b" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450262 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fce7685e-8301-4c02-8e1b-386646d84264" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc 
kubenswrapper[5010]: I0203 10:24:16.450270 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce7685e-8301-4c02-8e1b-386646d84264" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450283 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8144e4b8-89a7-4c08-86b9-219ea9d4645c" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450291 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8144e4b8-89a7-4c08-86b9-219ea9d4645c" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450321 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0e1d98-9045-4a70-8021-ac7dcf843775" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450327 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0e1d98-9045-4a70-8021-ac7dcf843775" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450349 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5b7adb-c7e4-4014-b37f-674861868979" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450355 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5b7adb-c7e4-4014-b37f-674861868979" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: E0203 10:24:16.450372 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83561b9b-ec1d-4ef5-bb05-48780834e40d" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450378 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="83561b9b-ec1d-4ef5-bb05-48780834e40d" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450573 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5b7adb-c7e4-4014-b37f-674861868979" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450585 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="90501abd-ab27-4c54-bd38-239e5803689b" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450596 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="fce7685e-8301-4c02-8e1b-386646d84264" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450614 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="83561b9b-ec1d-4ef5-bb05-48780834e40d" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450628 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8144e4b8-89a7-4c08-86b9-219ea9d4645c" containerName="mariadb-account-create-update" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.450644 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0e1d98-9045-4a70-8021-ac7dcf843775" containerName="mariadb-database-create" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.451838 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.455795 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.465906 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502139 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502332 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502378 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502495 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqnmc\" (UniqueName: \"kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502522 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.502552 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605204 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqnmc\" (UniqueName: \"kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605280 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" 
(UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605310 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605376 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605450 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.605473 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.606540 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.606577 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.607259 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.608017 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.608068 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 
10:24:16.633093 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqnmc\" (UniqueName: \"kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc\") pod \"dnsmasq-dns-764c5664d7-tpx4x\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:16 crc kubenswrapper[5010]: I0203 10:24:16.781191 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:20 crc kubenswrapper[5010]: W0203 10:24:20.281882 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb55fd4_6f97_47c3_bd98_89ca6331cf88.slice/crio-93d0e004e008b5e1b05321fcaf14211b090b2038acd1b389851fdfc6ab3c1331 WatchSource:0}: Error finding container 93d0e004e008b5e1b05321fcaf14211b090b2038acd1b389851fdfc6ab3c1331: Status 404 returned error can't find the container with id 93d0e004e008b5e1b05321fcaf14211b090b2038acd1b389851fdfc6ab3c1331 Feb 03 10:24:20 crc kubenswrapper[5010]: I0203 10:24:20.317228 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:24:21 crc kubenswrapper[5010]: I0203 10:24:21.225559 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8wjx" event={"ID":"a81f0078-44e5-4bbc-82ce-3d648e2e32db","Type":"ContainerStarted","Data":"3e8d95734ac813f12b8b00d5738e5d5d21869fee2e05c53312641bbb6e639906"} Feb 03 10:24:21 crc kubenswrapper[5010]: I0203 10:24:21.234172 5010 generic.go:334] "Generic (PLEG): container finished" podID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerID="9870cb3be829d265aa30927c41a48cc7802f5d65aec23cea9f8bcd10b02b6b19" exitCode=0 Feb 03 10:24:21 crc kubenswrapper[5010]: I0203 10:24:21.234257 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" event={"ID":"9eb55fd4-6f97-47c3-bd98-89ca6331cf88","Type":"ContainerDied","Data":"9870cb3be829d265aa30927c41a48cc7802f5d65aec23cea9f8bcd10b02b6b19"} Feb 03 10:24:21 crc kubenswrapper[5010]: I0203 10:24:21.234290 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" event={"ID":"9eb55fd4-6f97-47c3-bd98-89ca6331cf88","Type":"ContainerStarted","Data":"93d0e004e008b5e1b05321fcaf14211b090b2038acd1b389851fdfc6ab3c1331"} Feb 03 10:24:21 crc kubenswrapper[5010]: I0203 10:24:21.258010 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-b8wjx" podStartSLOduration=3.288603048 podStartE2EDuration="12.257986783s" podCreationTimestamp="2026-02-03 10:24:09 +0000 UTC" firstStartedPulling="2026-02-03 10:24:11.33018323 +0000 UTC m=+1321.486159359" lastFinishedPulling="2026-02-03 10:24:20.299566965 +0000 UTC m=+1330.455543094" observedRunningTime="2026-02-03 10:24:21.249165596 +0000 UTC m=+1331.405141725" watchObservedRunningTime="2026-02-03 10:24:21.257986783 +0000 UTC m=+1331.413962912" Feb 03 10:24:22 crc kubenswrapper[5010]: I0203 10:24:22.470609 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" event={"ID":"9eb55fd4-6f97-47c3-bd98-89ca6331cf88","Type":"ContainerStarted","Data":"c9a7cc65c09b93f157cada4e0c074bf50be6834a16b4169ebac2602a35731c7e"} Feb 03 10:24:22 crc kubenswrapper[5010]: I0203 10:24:22.472725 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 
10:24:22 crc kubenswrapper[5010]: I0203 10:24:22.475725 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xlhhb" event={"ID":"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3","Type":"ContainerStarted","Data":"c2c236cbcbee82d440a00402bffa84360077e085e5045869a24060dbc0c3411c"} Feb 03 10:24:22 crc kubenswrapper[5010]: I0203 10:24:22.500411 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" podStartSLOduration=6.500386571 podStartE2EDuration="6.500386571s" podCreationTimestamp="2026-02-03 10:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:22.493066372 +0000 UTC m=+1332.649042511" watchObservedRunningTime="2026-02-03 10:24:22.500386571 +0000 UTC m=+1332.656362700" Feb 03 10:24:22 crc kubenswrapper[5010]: I0203 10:24:22.527578 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-xlhhb" podStartSLOduration=3.8322007080000002 podStartE2EDuration="41.527541231s" podCreationTimestamp="2026-02-03 10:23:41 +0000 UTC" firstStartedPulling="2026-02-03 10:23:42.611153141 +0000 UTC m=+1292.767129260" lastFinishedPulling="2026-02-03 10:24:20.306493654 +0000 UTC m=+1330.462469783" observedRunningTime="2026-02-03 10:24:22.516856575 +0000 UTC m=+1332.672832704" watchObservedRunningTime="2026-02-03 10:24:22.527541231 +0000 UTC m=+1332.683517380" Feb 03 10:24:26 crc kubenswrapper[5010]: I0203 10:24:26.784455 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:24:26 crc kubenswrapper[5010]: I0203 10:24:26.862588 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"] Feb 03 10:24:26 crc kubenswrapper[5010]: I0203 10:24:26.862959 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-c5kgf" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="dnsmasq-dns" containerID="cri-o://f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98" gracePeriod=10 Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.384292 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.502975 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc\") pod \"44cce4a6-14dd-4b2d-9473-49edee803476\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.503035 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb\") pod \"44cce4a6-14dd-4b2d-9473-49edee803476\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.503066 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb\") pod \"44cce4a6-14dd-4b2d-9473-49edee803476\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.503113 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config\") pod \"44cce4a6-14dd-4b2d-9473-49edee803476\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.503326 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5td8\" (UniqueName: \"kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8\") pod \"44cce4a6-14dd-4b2d-9473-49edee803476\" (UID: \"44cce4a6-14dd-4b2d-9473-49edee803476\") " Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.512311 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8" (OuterVolumeSpecName: "kube-api-access-s5td8") pod "44cce4a6-14dd-4b2d-9473-49edee803476" (UID: "44cce4a6-14dd-4b2d-9473-49edee803476"). InnerVolumeSpecName "kube-api-access-s5td8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.561661 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44cce4a6-14dd-4b2d-9473-49edee803476" (UID: "44cce4a6-14dd-4b2d-9473-49edee803476"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.565502 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44cce4a6-14dd-4b2d-9473-49edee803476" (UID: "44cce4a6-14dd-4b2d-9473-49edee803476"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.574651 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config" (OuterVolumeSpecName: "config") pod "44cce4a6-14dd-4b2d-9473-49edee803476" (UID: "44cce4a6-14dd-4b2d-9473-49edee803476"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.588054 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44cce4a6-14dd-4b2d-9473-49edee803476" (UID: "44cce4a6-14dd-4b2d-9473-49edee803476"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.608061 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.608099 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5td8\" (UniqueName: \"kubernetes.io/projected/44cce4a6-14dd-4b2d-9473-49edee803476-kube-api-access-s5td8\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.608112 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.608138 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:27 crc kubenswrapper[5010]: I0203 10:24:27.608147 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44cce4a6-14dd-4b2d-9473-49edee803476-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.005975 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-c5kgf" event={"ID":"44cce4a6-14dd-4b2d-9473-49edee803476","Type":"ContainerDied","Data":"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98"} Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.006067 5010 scope.go:117] "RemoveContainer" containerID="f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.006059 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-c5kgf" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.005911 5010 generic.go:334] "Generic (PLEG): container finished" podID="44cce4a6-14dd-4b2d-9473-49edee803476" containerID="f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98" exitCode=0 Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.006309 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-c5kgf" event={"ID":"44cce4a6-14dd-4b2d-9473-49edee803476","Type":"ContainerDied","Data":"7b4cc9746175c611db5edf3a8b25a3610c6d4de7b21e5812358190938f2ecfc7"} Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.050091 5010 scope.go:117] "RemoveContainer" containerID="3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.073206 5010 scope.go:117] "RemoveContainer" containerID="f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98" Feb 03 10:24:28 crc kubenswrapper[5010]: E0203 10:24:28.074371 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98\": container with ID starting with f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98 not found: ID does not exist" containerID="f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.074426 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98"} err="failed to get container status \"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98\": rpc error: code = NotFound desc = could not find container \"f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98\": container with ID starting with f721b9cd727296728922ad3a89a7794ce345ff67be5a73e4e4a4dbf2226f6f98 not found: ID does not exist" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.074453 5010 scope.go:117] "RemoveContainer" containerID="3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.074574 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"] Feb 03 10:24:28 crc kubenswrapper[5010]: E0203 10:24:28.075119 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf\": container with ID starting with 3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf not found: ID does not exist" containerID="3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.075153 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf"} err="failed to get container status \"3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf\": rpc error: code = NotFound desc = could not find container \"3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf\": container with ID starting with 3c57d1f02480e226663bd51d322aaf3512d8cb461ee5df04050137b40a4bc8cf not found: ID does not exist" Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.098875 5010 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-c5kgf"] Feb 03 10:24:28 crc kubenswrapper[5010]: I0203 10:24:28.516134 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" path="/var/lib/kubelet/pods/44cce4a6-14dd-4b2d-9473-49edee803476/volumes" Feb 03 10:24:30 crc kubenswrapper[5010]: I0203 10:24:30.030279 5010 generic.go:334] "Generic (PLEG): container finished" podID="a81f0078-44e5-4bbc-82ce-3d648e2e32db" containerID="3e8d95734ac813f12b8b00d5738e5d5d21869fee2e05c53312641bbb6e639906" exitCode=0 Feb 03 10:24:30 crc kubenswrapper[5010]: I0203 10:24:30.030327 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8wjx" event={"ID":"a81f0078-44e5-4bbc-82ce-3d648e2e32db","Type":"ContainerDied","Data":"3e8d95734ac813f12b8b00d5738e5d5d21869fee2e05c53312641bbb6e639906"} Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.416036 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.490166 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grp76\" (UniqueName: \"kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76\") pod \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.490552 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle\") pod \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.491683 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data\") pod \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\" (UID: \"a81f0078-44e5-4bbc-82ce-3d648e2e32db\") " Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.506930 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76" (OuterVolumeSpecName: "kube-api-access-grp76") pod "a81f0078-44e5-4bbc-82ce-3d648e2e32db" (UID: "a81f0078-44e5-4bbc-82ce-3d648e2e32db"). InnerVolumeSpecName "kube-api-access-grp76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.532684 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a81f0078-44e5-4bbc-82ce-3d648e2e32db" (UID: "a81f0078-44e5-4bbc-82ce-3d648e2e32db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.561367 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data" (OuterVolumeSpecName: "config-data") pod "a81f0078-44e5-4bbc-82ce-3d648e2e32db" (UID: "a81f0078-44e5-4bbc-82ce-3d648e2e32db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.596986 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.597048 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grp76\" (UniqueName: \"kubernetes.io/projected/a81f0078-44e5-4bbc-82ce-3d648e2e32db-kube-api-access-grp76\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:31 crc kubenswrapper[5010]: I0203 10:24:31.597067 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81f0078-44e5-4bbc-82ce-3d648e2e32db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.056040 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8wjx" event={"ID":"a81f0078-44e5-4bbc-82ce-3d648e2e32db","Type":"ContainerDied","Data":"a7b60789589a796270441190392ade515cdcca0df1868691375db1fd1edbc5e5"} Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.056094 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b60789589a796270441190392ade515cdcca0df1868691375db1fd1edbc5e5" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.056170 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8wjx" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.494727 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7w6tr"] Feb 03 10:24:32 crc kubenswrapper[5010]: E0203 10:24:32.499024 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81f0078-44e5-4bbc-82ce-3d648e2e32db" containerName="keystone-db-sync" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.499173 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81f0078-44e5-4bbc-82ce-3d648e2e32db" containerName="keystone-db-sync" Feb 03 10:24:32 crc kubenswrapper[5010]: E0203 10:24:32.499288 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="init" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.499305 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="init" Feb 03 10:24:32 crc kubenswrapper[5010]: E0203 10:24:32.499901 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="dnsmasq-dns" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.499915 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="dnsmasq-dns" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.500263 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="44cce4a6-14dd-4b2d-9473-49edee803476" containerName="dnsmasq-dns" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.500287 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81f0078-44e5-4bbc-82ce-3d648e2e32db" containerName="keystone-db-sync" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.509491 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.572508 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.573042 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xdhtt" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.573277 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.573052 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.573639 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.579276 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7w6tr"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.587923 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.594330 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.620603 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.632619 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.632695 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw5x7\" (UniqueName: \"kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.632744 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.632779 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.632807 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 
crc kubenswrapper[5010]: I0203 10:24:32.632863 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.736986 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.737467 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw5x7\" (UniqueName: \"kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.737620 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.737744 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.737847 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.737949 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738154 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738533 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738610 5010 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738692 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738765 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rz9l\" (UniqueName: \"kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.738828 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.748614 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.750101 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.762405 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.751629 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.753450 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.779690 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mvrf4"] Feb 03 10:24:32 crc 
kubenswrapper[5010]: I0203 10:24:32.781743 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.792762 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.793053 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.793555 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw5x7\" (UniqueName: \"kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7\") pod \"keystone-bootstrap-7w6tr\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.805731 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j789z" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.838927 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mvrf4"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841012 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841079 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841147 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841179 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rz9l\" (UniqueName: \"kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841226 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.841314 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc 
kubenswrapper[5010]: I0203 10:24:32.842535 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.843737 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.844608 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.845231 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.845835 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.862318 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.864615 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.874065 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-9bhsm" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.874372 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.874544 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.874697 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.875304 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.905767 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.947927 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.948025 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlkh\" (UniqueName: \"kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.948173 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.958333 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.961379 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.968096 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rz9l\" (UniqueName: \"kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l\") pod \"dnsmasq-dns-5959f8865f-gpttb\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.980734 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.991469 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.993063 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:24:32 crc kubenswrapper[5010]: I0203 10:24:32.995873 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-b9wwp"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.007112 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.012815 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.013164 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gk5q6" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.013326 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.041799 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-g6tdx"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.044106 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.050990 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-j94mw" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.051457 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.052785 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlkh\" (UniqueName: \"kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.052834 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.052934 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.052973 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.053048 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.053082 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxtzm\" (UniqueName: \"kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " 
pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.053143 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.053170 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.066609 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.083902 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.104474 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b9wwp"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.135349 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g6tdx"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.152902 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlkh\" (UniqueName: \"kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh\") pod \"neutron-db-sync-mvrf4\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155709 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155790 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155841 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155868 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.155987 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rmrl\" (UniqueName: \"kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156028 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156063 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxtzm\" (UniqueName: \"kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156101 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156188 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156242 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156284 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156341 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id\") pod 
\"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156371 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156424 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156455 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156522 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l7tp\" (UniqueName: \"kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156553 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156590 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156617 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.156643 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f846k\" (UniqueName: \"kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.159075 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" 
Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.160093 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.163550 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.164505 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.169632 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.220768 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxtzm\" (UniqueName: \"kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm\") pod \"horizon-57c9d98597-wmwqg\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.244227 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259164 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259293 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259334 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rmrl\" (UniqueName: \"kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259367 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259399 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259443 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259466 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259494 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259511 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259538 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259748 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l7tp\" (UniqueName: \"kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259789 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259831 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259850 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259870 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f846k\" (UniqueName: \"kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.259930 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.261002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.265465 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.270851 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.274425 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.275444 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.279701 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.280671 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.281145 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.281226 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.282875 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.286639 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.287099 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tptfc"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.289570 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.291539 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.302897 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.303792 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.304432 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.304568 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dtdfs" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.321343 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tptfc"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.346763 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.362673 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.362734 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcm2f\" (UniqueName: \"kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.362825 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.362893 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.362979 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.464554 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " 
pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.464653 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.464744 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.464877 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.464918 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcm2f\" (UniqueName: \"kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.466010 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.470731 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.473915 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.478135 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.509475 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.755153 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l7tp\" (UniqueName: \"kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp\") pod \"barbican-db-sync-g6tdx\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.765018 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f846k\" (UniqueName: \"kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k\") pod \"cinder-db-sync-b9wwp\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.765563 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.792003 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rmrl\" (UniqueName: \"kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl\") pod \"ceilometer-0\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.797287 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcm2f\" (UniqueName: \"kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f\") pod \"placement-db-sync-tptfc\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.800581 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:24:33 crc kubenswrapper[5010]: I0203 10:24:33.867260 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.593596 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tptfc" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.597652 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.616081 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.818943 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.819437 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.819519 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hhmq\" (UniqueName: \"kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.819626 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.820492 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.820647 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.856770 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.856799 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.858062 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.858083 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mvrf4"] Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.858161 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927463 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927578 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927710 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927754 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927787 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hhmq\" (UniqueName: \"kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927848 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.927949 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.929160 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.929156 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc 
kubenswrapper[5010]: I0203 10:24:34.929447 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.929543 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:34 crc kubenswrapper[5010]: I0203 10:24:34.949985 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hhmq\" (UniqueName: \"kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq\") pod \"dnsmasq-dns-58dd9ff6bc-r249m\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.031837 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr86z\" (UniqueName: \"kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.032464 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.032591 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.032646 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.032723 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.136631 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr86z\" (UniqueName: \"kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 
10:24:35.136731 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.136814 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.136862 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.136908 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.138830 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.139820 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.141171 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.166485 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.191546 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.192652 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr86z\" (UniqueName: \"kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z\") pod \"horizon-6548998769-npmxc\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.484626 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6548998769-npmxc" Feb 03 10:24:35 crc kubenswrapper[5010]: I0203 10:24:35.960762 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mvrf4" event={"ID":"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c","Type":"ContainerStarted","Data":"2b0073ad8287411e1d59389e4452039e032d8e37832a1112a2e60a18196d8ae0"} Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.171684 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.231270 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.234812 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.278742 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.316725 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d4dk\" (UniqueName: \"kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.316779 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.316828 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.316924 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " 
pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.316979 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.421236 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d4dk\" (UniqueName: \"kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.421300 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.421352 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.421429 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.421487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.422608 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.423205 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.445146 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.464639 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d4dk\" (UniqueName: 
\"kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.468067 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key\") pod \"horizon-5b5b4c5ff-x859r\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.490763 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.542091 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.552177 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.568939 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:24:36 crc kubenswrapper[5010]: W0203 10:24:36.578458 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f771bc6_23e3_4382_89ea_f773805f789c.slice/crio-96801178c0f60b1be70f5a00384d47d9cf626976ce906ad24548febe89fb7fc8 WatchSource:0}: Error finding container 96801178c0f60b1be70f5a00384d47d9cf626976ce906ad24548febe89fb7fc8: Status 404 returned error can't find the container with id 96801178c0f60b1be70f5a00384d47d9cf626976ce906ad24548febe89fb7fc8 Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.600451 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.607604 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g6tdx"] Feb 03 10:24:36 crc kubenswrapper[5010]: I0203 10:24:36.697269 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7w6tr"] Feb 03 10:24:36 crc kubenswrapper[5010]: W0203 10:24:36.758205 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c75dd5e_8b56_4dc0_8e80_a6df3ec9a7ba.slice/crio-d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2 WatchSource:0}: Error finding container d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2: Status 404 returned error can't find the container with id d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2 Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.115031 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tptfc"] Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.138911 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57c9d98597-wmwqg" event={"ID":"7f771bc6-23e3-4382-89ea-f773805f789c","Type":"ContainerStarted","Data":"96801178c0f60b1be70f5a00384d47d9cf626976ce906ad24548febe89fb7fc8"} Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.140325 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-b9wwp"] Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.152197 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-7w6tr" event={"ID":"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba","Type":"ContainerStarted","Data":"d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2"} Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.167322 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" event={"ID":"378ea53a-1006-4116-a56d-7c466c494224","Type":"ContainerStarted","Data":"359ae3ad38c8aceae2d332d6b3825bb94840bbf169efcda9149246e76b81e498"} Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.176026 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.178317 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mvrf4" event={"ID":"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c","Type":"ContainerStarted","Data":"2f477c6764bb977e8cc3e17e43a92a85fa737e9bdd4ffa07901f030c855e03b4"} Feb 03 10:24:37 crc kubenswrapper[5010]: W0203 10:24:37.189785 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7535aa4_5a5e_4663_b9c5_7822d0836660.slice/crio-9c2ae9a172420144ce552f204613ad111ecce479d2e000586e38710bc90ab902 WatchSource:0}: Error finding container 9c2ae9a172420144ce552f204613ad111ecce479d2e000586e38710bc90ab902: Status 404 returned error can't find the container with id 9c2ae9a172420144ce552f204613ad111ecce479d2e000586e38710bc90ab902 Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.194688 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerStarted","Data":"61a59197d7bdf8ea63d4d37b8f71bb48f78f9037194046295bca9711dd2a3194"} Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.194808 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.227626 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mvrf4" podStartSLOduration=5.227591887 podStartE2EDuration="5.227591887s" podCreationTimestamp="2026-02-03 10:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:37.224409535 +0000 UTC m=+1347.380385664" watchObservedRunningTime="2026-02-03 10:24:37.227591887 +0000 UTC m=+1347.383568016" Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.232551 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6tdx" event={"ID":"bad34e68-b20a-486c-b06b-e19f5aaaf917","Type":"ContainerStarted","Data":"a9d5da882cdcbed71ee51c06f06cb45291d0d12cebefa2201b69150f2363476e"} Feb 03 10:24:37 crc kubenswrapper[5010]: I0203 10:24:37.391913 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.311358 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tptfc" event={"ID":"29ef610c-3c09-4b27-9b97-3a5350388caa","Type":"ContainerStarted","Data":"8dff0c755a50d3ce83f3790da9a77abbdd3719d09b62bae731558162867118c1"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.318646 5010 generic.go:334] "Generic (PLEG): container finished" podID="378ea53a-1006-4116-a56d-7c466c494224" 
containerID="00e55dbee70f472f8a93914d11cda4d852198236db1abda35bbcb237004b7327" exitCode=0 Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.318834 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" event={"ID":"378ea53a-1006-4116-a56d-7c466c494224","Type":"ContainerDied","Data":"00e55dbee70f472f8a93914d11cda4d852198236db1abda35bbcb237004b7327"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.323522 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerStarted","Data":"2db889447ff0bc0e6f1ca25bbfa660b5dc01678a634757b799ec80a5560e67e4"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.330914 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6548998769-npmxc" event={"ID":"2f7faa93-7520-4d4b-b153-ed311effd90b","Type":"ContainerStarted","Data":"b292b07f4a535a045b80c60269a48c9544e180d091d0068c00e312baf2b8ddb0"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.340072 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9wwp" event={"ID":"1acc33e7-f3ae-4131-a003-aa6b592269c6","Type":"ContainerStarted","Data":"dcbb37a8fd2f82ef82d966d8287692e503ed1134f141d666defaaf1447e6aa0a"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.356133 5010 generic.go:334] "Generic (PLEG): container finished" podID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerID="86940200a0f167ad56e8101970695c50456840462697eef05dc72062b5c839d7" exitCode=0 Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.356294 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" event={"ID":"f7535aa4-5a5e-4663-b9c5-7822d0836660","Type":"ContainerDied","Data":"86940200a0f167ad56e8101970695c50456840462697eef05dc72062b5c839d7"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.356335 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" event={"ID":"f7535aa4-5a5e-4663-b9c5-7822d0836660","Type":"ContainerStarted","Data":"9c2ae9a172420144ce552f204613ad111ecce479d2e000586e38710bc90ab902"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.366651 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7w6tr" event={"ID":"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba","Type":"ContainerStarted","Data":"284a769b3c25b0cdea9e5ddf661cc8aed190c024694193ebf7516c57518d0765"} Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.422413 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7w6tr" podStartSLOduration=6.422376298 podStartE2EDuration="6.422376298s" podCreationTimestamp="2026-02-03 10:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:38.413486109 +0000 UTC m=+1348.569462238" watchObservedRunningTime="2026-02-03 10:24:38.422376298 +0000 UTC m=+1348.578352427" Feb 03 10:24:38 crc kubenswrapper[5010]: I0203 10:24:38.913978 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.054654 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.054717 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.054891 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.054937 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rz9l\" (UniqueName: \"kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.055034 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.055077 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb\") pod \"378ea53a-1006-4116-a56d-7c466c494224\" (UID: \"378ea53a-1006-4116-a56d-7c466c494224\") " Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.081492 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l" (OuterVolumeSpecName: "kube-api-access-9rz9l") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "kube-api-access-9rz9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.094912 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.102907 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.119739 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.124871 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config" (OuterVolumeSpecName: "config") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.157656 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rz9l\" (UniqueName: \"kubernetes.io/projected/378ea53a-1006-4116-a56d-7c466c494224-kube-api-access-9rz9l\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.157701 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.157716 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.157727 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.157740 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.177404 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "378ea53a-1006-4116-a56d-7c466c494224" (UID: "378ea53a-1006-4116-a56d-7c466c494224"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.260646 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/378ea53a-1006-4116-a56d-7c466c494224-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.390748 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" event={"ID":"378ea53a-1006-4116-a56d-7c466c494224","Type":"ContainerDied","Data":"359ae3ad38c8aceae2d332d6b3825bb94840bbf169efcda9149246e76b81e498"} Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.390839 5010 scope.go:117] "RemoveContainer" containerID="00e55dbee70f472f8a93914d11cda4d852198236db1abda35bbcb237004b7327" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.391034 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gpttb" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.419762 5010 generic.go:334] "Generic (PLEG): container finished" podID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" containerID="c2c236cbcbee82d440a00402bffa84360077e085e5045869a24060dbc0c3411c" exitCode=0 Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.419926 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xlhhb" event={"ID":"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3","Type":"ContainerDied","Data":"c2c236cbcbee82d440a00402bffa84360077e085e5045869a24060dbc0c3411c"} Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.444732 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" event={"ID":"f7535aa4-5a5e-4663-b9c5-7822d0836660","Type":"ContainerStarted","Data":"54d52bbf972f2c68c46beb0620a95b30135d78a71e1e999b8b262f72fafa7a37"} Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.445394 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.518407 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" podStartSLOduration=6.518365582 podStartE2EDuration="6.518365582s" podCreationTimestamp="2026-02-03 10:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:24:39.49385574 +0000 UTC m=+1349.649831869" watchObservedRunningTime="2026-02-03 10:24:39.518365582 +0000 UTC m=+1349.674341711" Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.615907 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:39 crc kubenswrapper[5010]: I0203 10:24:39.630404 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gpttb"] Feb 03 10:24:40 crc kubenswrapper[5010]: I0203 10:24:40.524442 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="378ea53a-1006-4116-a56d-7c466c494224" path="/var/lib/kubelet/pods/378ea53a-1006-4116-a56d-7c466c494224/volumes" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.156683 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xlhhb" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.237830 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data\") pod \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.237985 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data\") pod \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.238113 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqxvx\" (UniqueName: \"kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx\") pod \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.238164 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle\") pod \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\" (UID: \"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3\") " Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.277586 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx" (OuterVolumeSpecName: "kube-api-access-nqxvx") pod "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" (UID: "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3"). InnerVolumeSpecName "kube-api-access-nqxvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.287491 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" (UID: "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.341133 5010 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.341195 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqxvx\" (UniqueName: \"kubernetes.io/projected/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-kube-api-access-nqxvx\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.387202 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" (UID: "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.393188 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data" (OuterVolumeSpecName: "config-data") pod "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" (UID: "a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.443012 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.443378 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.547972 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xlhhb" event={"ID":"a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3","Type":"ContainerDied","Data":"46779b8951b31f9858ffd66ac6e32f691ea2a94f077b82226673a024b7efc699"} Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.548100 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xlhhb" Feb 03 10:24:41 crc kubenswrapper[5010]: I0203 10:24:41.548644 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46779b8951b31f9858ffd66ac6e32f691ea2a94f077b82226673a024b7efc699" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.125952 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.126491 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" containerID="cri-o://54d52bbf972f2c68c46beb0620a95b30135d78a71e1e999b8b262f72fafa7a37" gracePeriod=10 Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.186936 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:24:42 crc kubenswrapper[5010]: E0203 10:24:42.193041 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="378ea53a-1006-4116-a56d-7c466c494224" containerName="init" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.193138 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="378ea53a-1006-4116-a56d-7c466c494224" containerName="init" Feb 03 10:24:42 crc kubenswrapper[5010]: E0203 10:24:42.193175 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" containerName="glance-db-sync" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.193191 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" containerName="glance-db-sync" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.193695 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="378ea53a-1006-4116-a56d-7c466c494224" containerName="init" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.193726 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" 
containerName="glance-db-sync" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.201936 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.281784 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.295002 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.295168 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.295809 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.296034 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.296082 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjdv\" (UniqueName: \"kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.296509 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.370020 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.401799 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.402884 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.404444 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.404594 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.404626 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjdv\" (UniqueName: \"kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.404666 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.409451 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.411639 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.412504 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.413880 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.414633 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: 
\"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.446589 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.454916 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.464358 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512248 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512372 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512421 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512461 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512497 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512542 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnlnb\" (UniqueName: \"kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.512562 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.548766 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjdv\" (UniqueName: 
\"kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv\") pod \"dnsmasq-dns-785d8bcb8c-4g4n5\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.578114 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.578181 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.582697 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.584993 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.594041 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.594438 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mtbjz" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.595177 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.597464 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.613725 5010 generic.go:334] "Generic (PLEG): container finished" podID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerID="54d52bbf972f2c68c46beb0620a95b30135d78a71e1e999b8b262f72fafa7a37" exitCode=0 Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.613814 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" event={"ID":"f7535aa4-5a5e-4663-b9c5-7822d0836660","Type":"ContainerDied","Data":"54d52bbf972f2c68c46beb0620a95b30135d78a71e1e999b8b262f72fafa7a37"} Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.614839 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.614912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.614943 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615164 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615241 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615375 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615459 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615502 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6776\" (UniqueName: \"kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615529 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615666 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615719 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnlnb\" (UniqueName: \"kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615742 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615809 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.615833 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.620717 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.623170 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.623235 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.625630 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.642047 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.642702 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.643578 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.654383 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnlnb\" (UniqueName: \"kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb\") pod \"horizon-7cdcd56868-k9h7g\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.665237 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cc988db4-2mpfb"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 
10:24:42.667113 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.708106 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cc988db4-2mpfb"] Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.718756 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-combined-ca-bundle\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.718844 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.718875 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-config-data\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.718902 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719067 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-secret-key\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719092 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-scripts\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719124 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719158 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719180 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q6776\" (UniqueName: \"kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719201 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719280 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-tls-certs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719307 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719342 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6scb\" (UniqueName: \"kubernetes.io/projected/2fedcc57-b16c-4177-a10e-f627269b4adb-kube-api-access-t6scb\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.719384 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fedcc57-b16c-4177-a10e-f627269b4adb-logs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.721578 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.721944 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.722689 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.725840 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.738759 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.740125 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.742881 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6776\" (UniqueName: \"kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.790250 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.802739 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.803281 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.823740 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fedcc57-b16c-4177-a10e-f627269b4adb-logs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.823891 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-combined-ca-bundle\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.823962 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-config-data\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.823991 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-secret-key\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.824009 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-scripts\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.824138 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-tls-certs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.824196 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6scb\" (UniqueName: \"kubernetes.io/projected/2fedcc57-b16c-4177-a10e-f627269b4adb-kube-api-access-t6scb\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.825266 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fedcc57-b16c-4177-a10e-f627269b4adb-logs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.826561 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-scripts\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.828459 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/2fedcc57-b16c-4177-a10e-f627269b4adb-config-data\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.833093 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-secret-key\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.833378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-combined-ca-bundle\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.839583 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fedcc57-b16c-4177-a10e-f627269b4adb-horizon-tls-certs\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:42 crc kubenswrapper[5010]: I0203 10:24:42.847069 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6scb\" (UniqueName: \"kubernetes.io/projected/2fedcc57-b16c-4177-a10e-f627269b4adb-kube-api-access-t6scb\") pod \"horizon-6cc988db4-2mpfb\" (UID: \"2fedcc57-b16c-4177-a10e-f627269b4adb\") " pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.123019 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.562360 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.564583 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.578043 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.586801 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652089 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652195 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652309 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652419 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652679 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwkv\" (UniqueName: \"kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652784 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.652867 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754359 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754434 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754486 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754541 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754585 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhwkv\" (UniqueName: \"kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754614 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.754648 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.755127 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.755833 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.755897 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc 
kubenswrapper[5010]: I0203 10:24:43.770299 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.771026 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.775158 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.777739 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhwkv\" (UniqueName: \"kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.872049 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:24:43 crc kubenswrapper[5010]: I0203 10:24:43.897091 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:24:45 crc kubenswrapper[5010]: I0203 10:24:45.171971 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Feb 03 10:24:45 crc kubenswrapper[5010]: I0203 10:24:45.693339 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" containerID="284a769b3c25b0cdea9e5ddf661cc8aed190c024694193ebf7516c57518d0765" exitCode=0 Feb 03 10:24:45 crc kubenswrapper[5010]: I0203 10:24:45.693425 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7w6tr" event={"ID":"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba","Type":"ContainerDied","Data":"284a769b3c25b0cdea9e5ddf661cc8aed190c024694193ebf7516c57518d0765"} Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.390293 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.390370 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.390424 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.391495 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.391569 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88" gracePeriod=600 Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.500078 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.587849 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.707602 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88" exitCode=0 Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.707839 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88"} Feb 03 10:24:46 crc kubenswrapper[5010]: I0203 10:24:46.707878 5010 scope.go:117] "RemoveContainer" containerID="221f195b125299df734f26b3fd40fd966d81cfff3c339b70c815feda6a5e1f4b" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.574772 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625408 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625582 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625636 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625760 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625868 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.625948 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw5x7\" (UniqueName: \"kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7\") pod \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\" (UID: \"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba\") " Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.634553 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts" (OuterVolumeSpecName: "scripts") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.634588 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.638168 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7" (OuterVolumeSpecName: "kube-api-access-lw5x7") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "kube-api-access-lw5x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.649302 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.655690 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data" (OuterVolumeSpecName: "config-data") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.668692 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" (UID: "1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.728736 5010 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.728990 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.729059 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.729116 5010 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.729172 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.729256 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw5x7\" (UniqueName: \"kubernetes.io/projected/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba-kube-api-access-lw5x7\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.793761 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-7w6tr" event={"ID":"1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba","Type":"ContainerDied","Data":"d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2"} Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.794132 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2dbdaf7c4fb793e606130a48124449992f37d61583b140dcfaf7dbb8bb3f1d2" Feb 03 10:24:53 crc kubenswrapper[5010]: I0203 10:24:53.793798 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7w6tr" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.677766 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7w6tr"] Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.691159 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7w6tr"] Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.774501 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-swx9t"] Feb 03 10:24:54 crc kubenswrapper[5010]: E0203 10:24:54.775289 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" containerName="keystone-bootstrap" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.775308 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" containerName="keystone-bootstrap" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.775520 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" containerName="keystone-bootstrap" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.776230 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.779515 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.779525 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.779696 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xdhtt" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.779830 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.780092 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.793480 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-swx9t"] Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847468 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847579 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847648 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847721 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk8xc\" (UniqueName: \"kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847750 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.847774 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958307 5010 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958408 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk8xc\" (UniqueName: \"kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958445 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958475 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958504 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.958562 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.962941 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.963329 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.964492 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.976230 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys\") pod \"keystone-bootstrap-swx9t\" (UID: 
\"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.978735 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk8xc\" (UniqueName: \"kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:54 crc kubenswrapper[5010]: I0203 10:24:54.981499 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data\") pod \"keystone-bootstrap-swx9t\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:55 crc kubenswrapper[5010]: I0203 10:24:55.097002 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:24:55 crc kubenswrapper[5010]: I0203 10:24:55.168114 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: i/o timeout" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.745428 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.745615 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wcm2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-tptfc_openstack(29ef610c-3c09-4b27-9b97-3a5350388caa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.747605 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-tptfc" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.784120 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.784384 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb7h566h8bh56bh5d8h594h5bh58fh4h5b8h8dh9h6dhb6h98h5fdh8chb6hdch688h5b6h5c7hcbh5f6h64fhd5h5f7h686h4h59hcfh597q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qxtzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-57c9d98597-wmwqg_openstack(7f771bc6-23e3-4382-89ea-f773805f789c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.787255 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-57c9d98597-wmwqg" podUID="7f771bc6-23e3-4382-89ea-f773805f789c" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.796519 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.796748 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n696h5ddh57dh5fbhfchc7h685h57h66h66ch5bdh698h65bh5c8h5bdh56h597h697h654h66fhb4h557h6fh575h57ch56fhfh594h6fh8ch65h587q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr86z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6548998769-npmxc_openstack(2f7faa93-7520-4d4b-b153-ed311effd90b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.799782 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6548998769-npmxc" podUID="2f7faa93-7520-4d4b-b153-ed311effd90b" Feb 03 10:24:55 crc kubenswrapper[5010]: E0203 10:24:55.819473 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-tptfc" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" Feb 03 10:24:56 crc kubenswrapper[5010]: I0203 10:24:56.516320 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba" path="/var/lib/kubelet/pods/1c75dd5e-8b56-4dc0-8e80-a6df3ec9a7ba/volumes" Feb 03 10:24:58 crc kubenswrapper[5010]: E0203 10:24:58.145541 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 03 10:24:58 crc kubenswrapper[5010]: E0203 10:24:58.146073 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n694h95h78h5d4h558h554h7ch96h589h5ddh545hbch57fh5f7hdfhc6h656h5f8h8fh658h68bh589h5c9h4h577h5cbh5cfh5fh545h68h66bh59q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rmrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4338eb03-3ad6-4d68-8d8a-a37694aff6d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.238345 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419200 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hhmq\" (UniqueName: \"kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419294 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419360 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419394 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419417 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.419526 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb\") pod \"f7535aa4-5a5e-4663-b9c5-7822d0836660\" (UID: \"f7535aa4-5a5e-4663-b9c5-7822d0836660\") " Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.429666 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq" (OuterVolumeSpecName: "kube-api-access-4hhmq") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "kube-api-access-4hhmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.469111 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.469780 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.469906 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.472778 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config" (OuterVolumeSpecName: "config") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.497052 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f7535aa4-5a5e-4663-b9c5-7822d0836660" (UID: "f7535aa4-5a5e-4663-b9c5-7822d0836660"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521605 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521642 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521653 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521662 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521675 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7535aa4-5a5e-4663-b9c5-7822d0836660-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.521687 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hhmq\" (UniqueName: \"kubernetes.io/projected/f7535aa4-5a5e-4663-b9c5-7822d0836660-kube-api-access-4hhmq\") on node \"crc\" DevicePath \"\"" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.861627 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" event={"ID":"f7535aa4-5a5e-4663-b9c5-7822d0836660","Type":"ContainerDied","Data":"9c2ae9a172420144ce552f204613ad111ecce479d2e000586e38710bc90ab902"} Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.861694 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.888173 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:24:58 crc kubenswrapper[5010]: I0203 10:24:58.898452 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-r249m"] Feb 03 10:25:00 crc kubenswrapper[5010]: I0203 10:25:00.168447 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-r249m" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: i/o timeout" Feb 03 10:25:00 crc kubenswrapper[5010]: I0203 10:25:00.513645 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" path="/var/lib/kubelet/pods/f7535aa4-5a5e-4663-b9c5-7822d0836660/volumes" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.762029 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6548998769-npmxc" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.772669 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892578 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data\") pod \"7f771bc6-23e3-4382-89ea-f773805f789c\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892620 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr86z\" (UniqueName: \"kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z\") pod \"2f7faa93-7520-4d4b-b153-ed311effd90b\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892669 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key\") pod \"2f7faa93-7520-4d4b-b153-ed311effd90b\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892696 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data\") pod \"2f7faa93-7520-4d4b-b153-ed311effd90b\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892794 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts\") pod \"7f771bc6-23e3-4382-89ea-f773805f789c\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892842 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs\") pod \"2f7faa93-7520-4d4b-b153-ed311effd90b\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892881 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts\") pod \"2f7faa93-7520-4d4b-b153-ed311effd90b\" (UID: \"2f7faa93-7520-4d4b-b153-ed311effd90b\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892950 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxtzm\" (UniqueName: \"kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm\") pod \"7f771bc6-23e3-4382-89ea-f773805f789c\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892977 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key\") pod \"7f771bc6-23e3-4382-89ea-f773805f789c\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.892998 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs\") pod \"7f771bc6-23e3-4382-89ea-f773805f789c\" (UID: \"7f771bc6-23e3-4382-89ea-f773805f789c\") " Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.893710 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts" (OuterVolumeSpecName: "scripts") pod "7f771bc6-23e3-4382-89ea-f773805f789c" (UID: "7f771bc6-23e3-4382-89ea-f773805f789c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.893932 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data" (OuterVolumeSpecName: "config-data") pod "7f771bc6-23e3-4382-89ea-f773805f789c" (UID: "7f771bc6-23e3-4382-89ea-f773805f789c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.894392 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs" (OuterVolumeSpecName: "logs") pod "2f7faa93-7520-4d4b-b153-ed311effd90b" (UID: "2f7faa93-7520-4d4b-b153-ed311effd90b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.894438 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts" (OuterVolumeSpecName: "scripts") pod "2f7faa93-7520-4d4b-b153-ed311effd90b" (UID: "2f7faa93-7520-4d4b-b153-ed311effd90b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.894906 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs" (OuterVolumeSpecName: "logs") pod "7f771bc6-23e3-4382-89ea-f773805f789c" (UID: "7f771bc6-23e3-4382-89ea-f773805f789c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.894925 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data" (OuterVolumeSpecName: "config-data") pod "2f7faa93-7520-4d4b-b153-ed311effd90b" (UID: "2f7faa93-7520-4d4b-b153-ed311effd90b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.901409 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7f771bc6-23e3-4382-89ea-f773805f789c" (UID: "7f771bc6-23e3-4382-89ea-f773805f789c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.902086 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm" (OuterVolumeSpecName: "kube-api-access-qxtzm") pod "7f771bc6-23e3-4382-89ea-f773805f789c" (UID: "7f771bc6-23e3-4382-89ea-f773805f789c"). InnerVolumeSpecName "kube-api-access-qxtzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.902166 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2f7faa93-7520-4d4b-b153-ed311effd90b" (UID: "2f7faa93-7520-4d4b-b153-ed311effd90b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.904620 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z" (OuterVolumeSpecName: "kube-api-access-cr86z") pod "2f7faa93-7520-4d4b-b153-ed311effd90b" (UID: "2f7faa93-7520-4d4b-b153-ed311effd90b"). InnerVolumeSpecName "kube-api-access-cr86z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.932929 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6548998769-npmxc" event={"ID":"2f7faa93-7520-4d4b-b153-ed311effd90b","Type":"ContainerDied","Data":"b292b07f4a535a045b80c60269a48c9544e180d091d0068c00e312baf2b8ddb0"} Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.933017 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6548998769-npmxc" Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.938997 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57c9d98597-wmwqg" event={"ID":"7f771bc6-23e3-4382-89ea-f773805f789c","Type":"ContainerDied","Data":"96801178c0f60b1be70f5a00384d47d9cf626976ce906ad24548febe89fb7fc8"} Feb 03 10:25:06 crc kubenswrapper[5010]: I0203 10:25:06.939037 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57c9d98597-wmwqg" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000183 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000330 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000344 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f7faa93-7520-4d4b-b153-ed311effd90b-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000356 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f7faa93-7520-4d4b-b153-ed311effd90b-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000367 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxtzm\" (UniqueName: \"kubernetes.io/projected/7f771bc6-23e3-4382-89ea-f773805f789c-kube-api-access-qxtzm\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000384 5010 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f771bc6-23e3-4382-89ea-f773805f789c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000398 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f771bc6-23e3-4382-89ea-f773805f789c-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000410 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f771bc6-23e3-4382-89ea-f773805f789c-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000423 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr86z\" (UniqueName: \"kubernetes.io/projected/2f7faa93-7520-4d4b-b153-ed311effd90b-kube-api-access-cr86z\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.000434 5010 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f7faa93-7520-4d4b-b153-ed311effd90b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.089396 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.098419 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6548998769-npmxc"] Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.118203 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.125853 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-57c9d98597-wmwqg"] Feb 03 10:25:07 crc kubenswrapper[5010]: E0203 10:25:07.378349 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 03 10:25:07 crc kubenswrapper[5010]: E0203 10:25:07.378512 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6l7tp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-g6tdx_openstack(bad34e68-b20a-486c-b06b-e19f5aaaf917): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:25:07 crc kubenswrapper[5010]: E0203 10:25:07.379764 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-g6tdx" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.961194 5010 generic.go:334] "Generic (PLEG): container finished" podID="5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" containerID="2f477c6764bb977e8cc3e17e43a92a85fa737e9bdd4ffa07901f030c855e03b4" exitCode=0 Feb 03 10:25:07 crc kubenswrapper[5010]: I0203 10:25:07.961251 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mvrf4" event={"ID":"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c","Type":"ContainerDied","Data":"2f477c6764bb977e8cc3e17e43a92a85fa737e9bdd4ffa07901f030c855e03b4"} Feb 03 10:25:07 crc kubenswrapper[5010]: E0203 10:25:07.965248 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-g6tdx" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" Feb 03 10:25:08 crc kubenswrapper[5010]: I0203 10:25:08.414145 5010 scope.go:117] "RemoveContainer" containerID="54d52bbf972f2c68c46beb0620a95b30135d78a71e1e999b8b262f72fafa7a37" Feb 03 10:25:08 crc 
kubenswrapper[5010]: E0203 10:25:08.443542 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 03 10:25:08 crc kubenswrapper[5010]: E0203 10:25:08.444195 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f846k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-b9wwp_openstack(1acc33e7-f3ae-4131-a003-aa6b592269c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:25:08 crc kubenswrapper[5010]: E0203 10:25:08.445593 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-b9wwp" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" Feb 03 10:25:08 crc kubenswrapper[5010]: I0203 10:25:08.519488 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7faa93-7520-4d4b-b153-ed311effd90b" path="/var/lib/kubelet/pods/2f7faa93-7520-4d4b-b153-ed311effd90b/volumes" Feb 03 
10:25:08 crc kubenswrapper[5010]: I0203 10:25:08.520342 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f771bc6-23e3-4382-89ea-f773805f789c" path="/var/lib/kubelet/pods/7f771bc6-23e3-4382-89ea-f773805f789c/volumes" Feb 03 10:25:08 crc kubenswrapper[5010]: I0203 10:25:08.857774 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"] Feb 03 10:25:08 crc kubenswrapper[5010]: I0203 10:25:08.897325 5010 scope.go:117] "RemoveContainer" containerID="86940200a0f167ad56e8101970695c50456840462697eef05dc72062b5c839d7" Feb 03 10:25:08 crc kubenswrapper[5010]: W0203 10:25:08.900201 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e6ce46b_7ed7_48c5_a09c_cb39ec7bf34b.slice/crio-df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede WatchSource:0}: Error finding container df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede: Status 404 returned error can't find the container with id df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.012260 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerStarted","Data":"df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede"} Feb 03 10:25:09 crc kubenswrapper[5010]: E0203 10:25:09.061544 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-b9wwp" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.070566 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:25:09 crc kubenswrapper[5010]: W0203 10:25:09.327757 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fedcc57_b16c_4177_a10e_f627269b4adb.slice/crio-76388283145b5851ac3db3834097f01fb292268a133c5db4f83b3ead8c57274d WatchSource:0}: Error finding container 76388283145b5851ac3db3834097f01fb292268a133c5db4f83b3ead8c57274d: Status 404 returned error can't find the container with id 76388283145b5851ac3db3834097f01fb292268a133c5db4f83b3ead8c57274d Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.333252 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cc988db4-2mpfb"] Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.443775 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.560961 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdlkh\" (UniqueName: \"kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh\") pod \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.561056 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle\") pod \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.561148 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config\") pod \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\" (UID: \"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c\") " Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.581567 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh" (OuterVolumeSpecName: "kube-api-access-tdlkh") pod "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" (UID: "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c"). InnerVolumeSpecName "kube-api-access-tdlkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.654381 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.663351 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdlkh\" (UniqueName: \"kubernetes.io/projected/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-kube-api-access-tdlkh\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.665916 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-swx9t"] Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.688798 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.717193 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config" (OuterVolumeSpecName: "config") pod "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" (UID: "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.767322 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.896974 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" (UID: "5c2a4fab-65d6-47ac-9829-2b5b5e8d412c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:09 crc kubenswrapper[5010]: I0203 10:25:09.970740 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.043853 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerStarted","Data":"d91d141426317acd31c21e9040c1e38df0008cc513ccacd6d4ecf8718788f6f7"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.053461 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerStarted","Data":"2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.053522 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerStarted","Data":"d39b7b37971eb5d63b6cabefb740041e4cc9cc6265fc84bc4b6ff52605291d6a"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.066177 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerStarted","Data":"d4d81e3a7705c11b3d4b432eac5a8a598f0ea28d2b2cfb774c5c3a7b63578142"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.069585 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" event={"ID":"6195408a-292f-4e66-84a7-5007ba24c702","Type":"ContainerStarted","Data":"e28ff655fe84bd57493957f3f09a3080ab17c5d462a3b8177036f3153667da0d"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.084369 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7cdcd56868-k9h7g" podStartSLOduration=28.084350373 podStartE2EDuration="28.084350373s" podCreationTimestamp="2026-02-03 10:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:10.079523679 +0000 UTC m=+1380.235499808" watchObservedRunningTime="2026-02-03 10:25:10.084350373 +0000 UTC m=+1380.240326502" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.091504 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerStarted","Data":"09d80471a02be8b08b6c00cb53adbc75820f62dbcbe1bed30472a593dcfe57cb"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.166698 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988db4-2mpfb" event={"ID":"2fedcc57-b16c-4177-a10e-f627269b4adb","Type":"ContainerStarted","Data":"1d7ecd8900f582370f2aa2ea7d17e98fbb53211402ee75abd7707475bb689f68"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.167366 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988db4-2mpfb" event={"ID":"2fedcc57-b16c-4177-a10e-f627269b4adb","Type":"ContainerStarted","Data":"76388283145b5851ac3db3834097f01fb292268a133c5db4f83b3ead8c57274d"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.200510 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-swx9t" 
event={"ID":"457510b3-7c5a-456d-9df3-54fa7dee8c4b","Type":"ContainerStarted","Data":"9bb617f937270e1fe6e444469ff83627ed35fc24df5672358eff75f2893f7693"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.268020 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerStarted","Data":"1e0c0b172a23175ded34e25aee553cea1577eb12ecd614b67b01f55633483ef4"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.268111 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerStarted","Data":"5ec57a7e44cc0f82c124057f7268cf9e4686f96d4ca8ba657715ac39cccda8e4"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.268369 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b5b4c5ff-x859r" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon-log" containerID="cri-o://5ec57a7e44cc0f82c124057f7268cf9e4686f96d4ca8ba657715ac39cccda8e4" gracePeriod=30 Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.269203 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b5b4c5ff-x859r" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon" containerID="cri-o://1e0c0b172a23175ded34e25aee553cea1577eb12ecd614b67b01f55633483ef4" gracePeriod=30 Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.302633 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mvrf4" event={"ID":"5c2a4fab-65d6-47ac-9829-2b5b5e8d412c","Type":"ContainerDied","Data":"2b0073ad8287411e1d59389e4452039e032d8e37832a1112a2e60a18196d8ae0"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.306887 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mvrf4" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.313400 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b0073ad8287411e1d59389e4452039e032d8e37832a1112a2e60a18196d8ae0" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.353127 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1"} Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.387573 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b5b4c5ff-x859r" podStartSLOduration=4.422412738 podStartE2EDuration="34.38754674s" podCreationTimestamp="2026-02-03 10:24:36 +0000 UTC" firstStartedPulling="2026-02-03 10:24:37.415592094 +0000 UTC m=+1347.571568223" lastFinishedPulling="2026-02-03 10:25:07.380726096 +0000 UTC m=+1377.536702225" observedRunningTime="2026-02-03 10:25:10.320354867 +0000 UTC m=+1380.476331016" watchObservedRunningTime="2026-02-03 10:25:10.38754674 +0000 UTC m=+1380.543522869" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.390978 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426256 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-867995856-hbnv9"] Feb 03 10:25:10 crc kubenswrapper[5010]: E0203 10:25:10.426700 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="init" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426714 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="init" Feb 03 10:25:10 crc kubenswrapper[5010]: E0203 10:25:10.426726 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426733 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" Feb 03 10:25:10 crc kubenswrapper[5010]: E0203 10:25:10.426751 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" containerName="neutron-db-sync" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426758 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" containerName="neutron-db-sync" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426939 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7535aa4-5a5e-4663-b9c5-7822d0836660" containerName="dnsmasq-dns" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.426961 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" containerName="neutron-db-sync" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.429057 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.434069 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.434181 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.434800 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-867995856-hbnv9"] Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.437374 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.437387 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j789z" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.496145 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.496307 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.496440 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.496500 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkvkc\" (UniqueName: \"kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.496575 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.499141 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.501790 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: E0203 10:25:10.580114 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c2a4fab_65d6_47ac_9829_2b5b5e8d412c.slice/crio-2b0073ad8287411e1d59389e4452039e032d8e37832a1112a2e60a18196d8ae0\": RecentStats: unable to find data in memory cache]" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598077 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598255 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkvkc\" (UniqueName: \"kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598365 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598471 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598658 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598755 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598834 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.598965 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config\") pod \"neutron-867995856-hbnv9\" (UID: 
\"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.599143 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.599485 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.599645 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54blj\" (UniqueName: \"kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.605548 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.605925 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.621225 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.629763 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.653786 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.654338 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkvkc\" (UniqueName: \"kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc\") pod \"neutron-867995856-hbnv9\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.655908 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.656016 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config\") pod \"neutron-867995856-hbnv9\" (UID: 
\"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.740667 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.741532 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.741834 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.742337 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.742391 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54blj\" (UniqueName: \"kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.742755 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.745830 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.748078 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.748618 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " 
pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.752703 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.766154 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.820842 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54blj\" (UniqueName: \"kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj\") pod \"dnsmasq-dns-55f844cf75-v4m78\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.933426 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:10 crc kubenswrapper[5010]: I0203 10:25:10.973037 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.386773 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tptfc" event={"ID":"29ef610c-3c09-4b27-9b97-3a5350388caa","Type":"ContainerStarted","Data":"9f5dffa42b9c5fba57b57a1ca0e358ff317d50df295683f9bc9e42abb84b1b81"} Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.410842 5010 generic.go:334] "Generic (PLEG): container finished" podID="6195408a-292f-4e66-84a7-5007ba24c702" containerID="379ab01e67ed33eb16a52d733d3fa47b3bc67d903a473cca21c2a2fbf2a80135" exitCode=0 Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.410932 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" event={"ID":"6195408a-292f-4e66-84a7-5007ba24c702","Type":"ContainerDied","Data":"379ab01e67ed33eb16a52d733d3fa47b3bc67d903a473cca21c2a2fbf2a80135"} Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.411549 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tptfc" podStartSLOduration=5.38555528 podStartE2EDuration="38.411532018s" podCreationTimestamp="2026-02-03 10:24:33 +0000 UTC" firstStartedPulling="2026-02-03 10:24:37.143292014 +0000 UTC m=+1347.299268143" lastFinishedPulling="2026-02-03 10:25:10.169268752 +0000 UTC m=+1380.325244881" observedRunningTime="2026-02-03 10:25:11.407138614 +0000 UTC m=+1381.563114753" watchObservedRunningTime="2026-02-03 10:25:11.411532018 +0000 UTC m=+1381.567508237" Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.432071 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerStarted","Data":"8a5453edee79c0d75e7ddeabeb025c5dee661893de0985e382bb10724d267f76"} Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.440069 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988db4-2mpfb" 
event={"ID":"2fedcc57-b16c-4177-a10e-f627269b4adb","Type":"ContainerStarted","Data":"45c56002ab101b0e77fc5934aa412e9d50c3e636af770ec4fe10888a673e7f7e"} Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.482109 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cc988db4-2mpfb" podStartSLOduration=29.482086406 podStartE2EDuration="29.482086406s" podCreationTimestamp="2026-02-03 10:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:11.470932869 +0000 UTC m=+1381.626909018" watchObservedRunningTime="2026-02-03 10:25:11.482086406 +0000 UTC m=+1381.638062545" Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.488232 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-swx9t" event={"ID":"457510b3-7c5a-456d-9df3-54fa7dee8c4b","Type":"ContainerStarted","Data":"eec510d597d8f2314ae76e8de6136bb5224447e6e83068a025a8dfed4080a04f"} Feb 03 10:25:11 crc kubenswrapper[5010]: I0203 10:25:11.519041 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-swx9t" podStartSLOduration=17.519021829 podStartE2EDuration="17.519021829s" podCreationTimestamp="2026-02-03 10:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:11.517260613 +0000 UTC m=+1381.673236762" watchObservedRunningTime="2026-02-03 10:25:11.519021829 +0000 UTC m=+1381.674997958" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.092270 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:25:12 crc kubenswrapper[5010]: W0203 10:25:12.212790 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod800c4356_da72_47c4_9a83_5eeceacc7211.slice/crio-a39cc9b17b280be33534b557e14c9c1d9f99cb76acef07ae259bc5d74339aa49 WatchSource:0}: Error finding container a39cc9b17b280be33534b557e14c9c1d9f99cb76acef07ae259bc5d74339aa49: Status 404 returned error can't find the container with id a39cc9b17b280be33534b557e14c9c1d9f99cb76acef07ae259bc5d74339aa49 Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.272591 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-867995856-hbnv9"] Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.493955 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.575539 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" event={"ID":"800c4356-da72-47c4-9a83-5eeceacc7211","Type":"ContainerStarted","Data":"a39cc9b17b280be33534b557e14c9c1d9f99cb76acef07ae259bc5d74339aa49"} Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.578160 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" event={"ID":"6195408a-292f-4e66-84a7-5007ba24c702","Type":"ContainerDied","Data":"e28ff655fe84bd57493957f3f09a3080ab17c5d462a3b8177036f3153667da0d"} Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.578231 5010 scope.go:117] "RemoveContainer" containerID="379ab01e67ed33eb16a52d733d3fa47b3bc67d903a473cca21c2a2fbf2a80135" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.578382 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4g4n5" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.594998 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgjdv\" (UniqueName: \"kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.595052 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.595089 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.595140 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.595226 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.595266 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb\") pod \"6195408a-292f-4e66-84a7-5007ba24c702\" (UID: \"6195408a-292f-4e66-84a7-5007ba24c702\") " Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.597197 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerStarted","Data":"5d57a17f6b627eededa0a21aa0ef2051ab13fadb63e9a5ef111d5cb1f8d96193"} Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 
10:25:12.611952 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerStarted","Data":"6700db575ba245cd84da8dd0d6b288edc79eb5817a450848a4a630c96ccb0a97"} Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.769176 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv" (OuterVolumeSpecName: "kube-api-access-bgjdv") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "kube-api-access-bgjdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.805741 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.807063 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.838281 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.840777 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.871016 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.873969 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgjdv\" (UniqueName: \"kubernetes.io/projected/6195408a-292f-4e66-84a7-5007ba24c702-kube-api-access-bgjdv\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.874008 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.906472 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.914785 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.976147 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:12 crc kubenswrapper[5010]: I0203 10:25:12.976176 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.053248 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config" (OuterVolumeSpecName: "config") pod "6195408a-292f-4e66-84a7-5007ba24c702" (UID: "6195408a-292f-4e66-84a7-5007ba24c702"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.079748 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6195408a-292f-4e66-84a7-5007ba24c702-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.124939 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.124997 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.220954 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:13 crc kubenswrapper[5010]: E0203 10:25:13.222164 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6195408a-292f-4e66-84a7-5007ba24c702" containerName="init" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.222184 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6195408a-292f-4e66-84a7-5007ba24c702" containerName="init" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.222761 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="6195408a-292f-4e66-84a7-5007ba24c702" containerName="init" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.224627 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.248483 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.248747 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.267788 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.298036 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.319208 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4g4n5"] Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.388863 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.388980 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.389017 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.389088 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.389128 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.389163 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnx67\" (UniqueName: \"kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.389256 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.491943 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492031 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492056 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492102 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492126 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492151 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnx67\" (UniqueName: \"kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.492198 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.498017 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.498578 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: 
\"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.500305 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.503799 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.509714 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.514472 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.521947 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnx67\" (UniqueName: \"kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67\") pod \"neutron-58c5b6f6cc-94dq7\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:13 crc kubenswrapper[5010]: I0203 10:25:13.610991 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.523188 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6195408a-292f-4e66-84a7-5007ba24c702" path="/var/lib/kubelet/pods/6195408a-292f-4e66-84a7-5007ba24c702/volumes" Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.649283 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:14 crc kubenswrapper[5010]: W0203 10:25:14.656290 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31521b0f_9e4f_4cfc_b0e8_e9e2bd2ca688.slice/crio-b27f611dc82e161f85b167c99dbce2d08eedaac7c3dd33e70725328f6c7d0a68 WatchSource:0}: Error finding container b27f611dc82e161f85b167c99dbce2d08eedaac7c3dd33e70725328f6c7d0a68: Status 404 returned error can't find the container with id b27f611dc82e161f85b167c99dbce2d08eedaac7c3dd33e70725328f6c7d0a68 Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.663375 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerStarted","Data":"04f1ed0eb618ead4dfd5e192e6cbd45c7a42c68a8906bfc9878f7864e6544b0e"} Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.663558 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-log" containerID="cri-o://6700db575ba245cd84da8dd0d6b288edc79eb5817a450848a4a630c96ccb0a97" gracePeriod=30 Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.667138 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-httpd" containerID="cri-o://04f1ed0eb618ead4dfd5e192e6cbd45c7a42c68a8906bfc9878f7864e6544b0e" gracePeriod=30 Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.727456 5010 generic.go:334] "Generic (PLEG): container finished" podID="800c4356-da72-47c4-9a83-5eeceacc7211" containerID="e300605267e4f1076a4841165415138776a8cf13a2c4a8aef99e228176fdb314" exitCode=0 Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.727662 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" event={"ID":"800c4356-da72-47c4-9a83-5eeceacc7211","Type":"ContainerDied","Data":"e300605267e4f1076a4841165415138776a8cf13a2c4a8aef99e228176fdb314"} Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.730825 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=32.730798846 podStartE2EDuration="32.730798846s" podCreationTimestamp="2026-02-03 10:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:14.726950207 +0000 UTC m=+1384.882926336" watchObservedRunningTime="2026-02-03 10:25:14.730798846 +0000 UTC m=+1384.886774975" Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.780865 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerStarted","Data":"b4e4a1e6a2630ad64ab7d63e96ac55cace7d3a6b86ca6cfcc1a22bf419376de0"} Feb 03 10:25:14 crc kubenswrapper[5010]: 
I0203 10:25:14.781160 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-log" containerID="cri-o://8a5453edee79c0d75e7ddeabeb025c5dee661893de0985e382bb10724d267f76" gracePeriod=30 Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.781671 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-httpd" containerID="cri-o://b4e4a1e6a2630ad64ab7d63e96ac55cace7d3a6b86ca6cfcc1a22bf419376de0" gracePeriod=30 Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.789941 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerStarted","Data":"13a99ef6826ee2239f9e033be19a6f4c730512b38fb4cc1caa87b9ad6b5789db"} Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.789992 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerStarted","Data":"61b9f09360bad3b65b22af3bd28bc767427a951a1f75a5674af55a31458394a9"} Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.790081 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:14 crc kubenswrapper[5010]: I0203 10:25:14.834959 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=32.83492333 podStartE2EDuration="32.83492333s" podCreationTimestamp="2026-02-03 10:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:14.815603062 +0000 UTC m=+1384.971579191" watchObservedRunningTime="2026-02-03 10:25:14.83492333 +0000 UTC m=+1384.990899459" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.228635 5010 generic.go:334] "Generic (PLEG): container finished" podID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerID="04f1ed0eb618ead4dfd5e192e6cbd45c7a42c68a8906bfc9878f7864e6544b0e" exitCode=0 Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.261505 5010 generic.go:334] "Generic (PLEG): container finished" podID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerID="6700db575ba245cd84da8dd0d6b288edc79eb5817a450848a4a630c96ccb0a97" exitCode=143 Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.232346 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerDied","Data":"04f1ed0eb618ead4dfd5e192e6cbd45c7a42c68a8906bfc9878f7864e6544b0e"} Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.262163 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerDied","Data":"6700db575ba245cd84da8dd0d6b288edc79eb5817a450848a4a630c96ccb0a97"} Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.325607 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerStarted","Data":"f95d5f955943f1d6179b138d89e148c3a26347690a24c1fd2737b1cfd76d3955"} Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.326034 
5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerStarted","Data":"b27f611dc82e161f85b167c99dbce2d08eedaac7c3dd33e70725328f6c7d0a68"} Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.346084 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" event={"ID":"800c4356-da72-47c4-9a83-5eeceacc7211","Type":"ContainerStarted","Data":"d1764054e077cd4256f8f822597e57237fec354ad2e79a0451fb06420764c4a9"} Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.346924 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.372396 5010 generic.go:334] "Generic (PLEG): container finished" podID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerID="b4e4a1e6a2630ad64ab7d63e96ac55cace7d3a6b86ca6cfcc1a22bf419376de0" exitCode=0 Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.372455 5010 generic.go:334] "Generic (PLEG): container finished" podID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerID="8a5453edee79c0d75e7ddeabeb025c5dee661893de0985e382bb10724d267f76" exitCode=143 Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.373981 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerDied","Data":"b4e4a1e6a2630ad64ab7d63e96ac55cace7d3a6b86ca6cfcc1a22bf419376de0"} Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.374049 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerDied","Data":"8a5453edee79c0d75e7ddeabeb025c5dee661893de0985e382bb10724d267f76"} Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.397278 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" podStartSLOduration=6.397209114 podStartE2EDuration="6.397209114s" podCreationTimestamp="2026-02-03 10:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:16.394610727 +0000 UTC m=+1386.550586856" watchObservedRunningTime="2026-02-03 10:25:16.397209114 +0000 UTC m=+1386.553185243" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.400578 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-867995856-hbnv9" podStartSLOduration=6.40055684 podStartE2EDuration="6.40055684s" podCreationTimestamp="2026-02-03 10:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:14.859140804 +0000 UTC m=+1385.015116943" watchObservedRunningTime="2026-02-03 10:25:16.40055684 +0000 UTC m=+1386.556532979" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.513594 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595013 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595134 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhwkv\" (UniqueName: \"kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595195 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595395 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595766 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.595928 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.596094 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\" (UID: \"c01a7e05-aa67-4606-9a08-c7a91dd9b332\") " Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.608109 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs" (OuterVolumeSpecName: "logs") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.630432 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.637474 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts" (OuterVolumeSpecName: "scripts") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.680545 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv" (OuterVolumeSpecName: "kube-api-access-qhwkv") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "kube-api-access-qhwkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.680712 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.707499 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.707826 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.707887 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhwkv\" (UniqueName: \"kubernetes.io/projected/c01a7e05-aa67-4606-9a08-c7a91dd9b332-kube-api-access-qhwkv\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.707944 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.708006 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c01a7e05-aa67-4606-9a08-c7a91dd9b332-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.737747 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.738154 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.783590 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.811934 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:16 crc kubenswrapper[5010]: I0203 10:25:16.812000 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:16.821672 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data" (OuterVolumeSpecName: "config-data") pod "c01a7e05-aa67-4606-9a08-c7a91dd9b332" (UID: "c01a7e05-aa67-4606-9a08-c7a91dd9b332"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.203254 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230288 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230443 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230642 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6776\" (UniqueName: \"kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230688 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230734 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230768 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.230845 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs\") pod \"e731f56b-df87-43c2-9b58-dcb496df80c9\" (UID: \"e731f56b-df87-43c2-9b58-dcb496df80c9\") " Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.231429 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c01a7e05-aa67-4606-9a08-c7a91dd9b332-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.232843 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs" (OuterVolumeSpecName: "logs") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.233273 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.238487 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776" (OuterVolumeSpecName: "kube-api-access-q6776") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "kube-api-access-q6776". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.242684 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts" (OuterVolumeSpecName: "scripts") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.258787 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.322875 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.331627 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data" (OuterVolumeSpecName: "config-data") pod "e731f56b-df87-43c2-9b58-dcb496df80c9" (UID: "e731f56b-df87-43c2-9b58-dcb496df80c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.335808 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.337655 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.337745 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6776\" (UniqueName: \"kubernetes.io/projected/e731f56b-df87-43c2-9b58-dcb496df80c9-kube-api-access-q6776\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.337852 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.337974 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.338036 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e731f56b-df87-43c2-9b58-dcb496df80c9-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.338111 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e731f56b-df87-43c2-9b58-dcb496df80c9-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.393815 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.433921 5010 generic.go:334] "Generic (PLEG): container finished" podID="29ef610c-3c09-4b27-9b97-3a5350388caa" containerID="9f5dffa42b9c5fba57b57a1ca0e358ff317d50df295683f9bc9e42abb84b1b81" exitCode=0 Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.434096 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tptfc" event={"ID":"29ef610c-3c09-4b27-9b97-3a5350388caa","Type":"ContainerDied","Data":"9f5dffa42b9c5fba57b57a1ca0e358ff317d50df295683f9bc9e42abb84b1b81"} Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.443808 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e731f56b-df87-43c2-9b58-dcb496df80c9","Type":"ContainerDied","Data":"09d80471a02be8b08b6c00cb53adbc75820f62dbcbe1bed30472a593dcfe57cb"} Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.444176 5010 scope.go:117] "RemoveContainer" 
containerID="b4e4a1e6a2630ad64ab7d63e96ac55cace7d3a6b86ca6cfcc1a22bf419376de0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.444525 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.447152 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.456101 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c01a7e05-aa67-4606-9a08-c7a91dd9b332","Type":"ContainerDied","Data":"d4d81e3a7705c11b3d4b432eac5a8a598f0ea28d2b2cfb774c5c3a7b63578142"} Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.456409 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.498222 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerStarted","Data":"e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded"} Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.499527 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.539591 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58c5b6f6cc-94dq7" podStartSLOduration=4.539554783 podStartE2EDuration="4.539554783s" podCreationTimestamp="2026-02-03 10:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:17.52235622 +0000 UTC m=+1387.678332359" watchObservedRunningTime="2026-02-03 10:25:17.539554783 +0000 UTC m=+1387.695530922" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.625652 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.647872 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.685917 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.707956 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.712657 5010 scope.go:117] "RemoveContainer" containerID="8a5453edee79c0d75e7ddeabeb025c5dee661893de0985e382bb10724d267f76" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.731321 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:25:17 crc kubenswrapper[5010]: E0203 10:25:17.732137 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-httpd" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732171 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-httpd" Feb 03 10:25:17 crc kubenswrapper[5010]: E0203 10:25:17.732195 5010 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732205 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: E0203 10:25:17.732242 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732252 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: E0203 10:25:17.732268 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-httpd" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732276 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-httpd" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732537 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-httpd" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732608 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732654 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-log" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.732668 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" containerName="glance-httpd" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.733949 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.741420 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.741634 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.741816 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.741975 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mtbjz" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.744375 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.754184 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.758747 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.762734 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764488 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764565 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764628 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764655 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764683 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764723 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ddcb\" (UniqueName: \"kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764814 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764853 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.764876 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.767871 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.779643 5010 scope.go:117] "RemoveContainer" containerID="04f1ed0eb618ead4dfd5e192e6cbd45c7a42c68a8906bfc9878f7864e6544b0e" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874395 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874521 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874557 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874606 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ddcb\" (UniqueName: \"kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874674 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874726 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874745 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.874797 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc 
kubenswrapper[5010]: I0203 10:25:17.877188 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.877468 5010 scope.go:117] "RemoveContainer" containerID="6700db575ba245cd84da8dd0d6b288edc79eb5817a450848a4a630c96ccb0a97" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.878175 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.881314 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.896520 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.898087 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.914192 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.917117 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ddcb\" (UniqueName: \"kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.917538 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.925532 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " pod="openstack/glance-default-internal-api-0"
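
Every volume of the re-created glance-default-internal-api-0 pod walks the same reconciler sequence above: operationExecutor.VerifyControllerAttachedVolume started, then MountVolume started, then MountVolume.SetUp succeeded, with the extra MountVolume.MountDevice step for the local-volume PV that lands on /mnt/openstack/pv10. A small Python sketch, assuming the quoting style of the records above, that reduces such lines to the furthest mount phase each (pod, volume) pair reached; the phase strings mirror the log messages, everything else is illustrative:

    import re
    from collections import defaultdict

    # Ordered mount phases, earliest to furthest along, as they read in the records.
    PHASES = ['VerifyControllerAttachedVolume started',
              'MountVolume started',
              'MountVolume.MountDevice succeeded',
              'MountVolume.SetUp succeeded']

    def mount_progress(lines):
        state = defaultdict(int)  # (pod, volume) -> index of furthest phase seen
        for line in lines:
            vol = re.search(r'for volume \\?"([\w-]+)\\?"', line)
            pod = re.search(r'pod="([^"]+)"', line)
            if not (vol and pod):
                continue
            for i, phase in enumerate(PHASES):
                if phase in line:
                    key = (pod.group(1), vol.group(1))
                    state[key] = max(state[key], i)
        return {key: PHASES[i] for key, i in state.items()}

A healthy attach ends at MountVolume.SetUp succeeded for every volume, which is exactly where the eight internal-api volumes above finish.
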
pod="openstack/glance-default-internal-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980352 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980530 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980591 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980713 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84sf\" (UniqueName: \"kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980751 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980789 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:17 crc kubenswrapper[5010]: I0203 10:25:17.980823 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.064637 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083128 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083262 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083329 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083491 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083533 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083614 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v84sf\" (UniqueName: \"kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083668 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.083739 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.084049 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Feb 03 
10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.085089 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.089174 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.092418 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.098415 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.103305 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.105064 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.121051 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v84sf\" (UniqueName: \"kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.152547 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.504370 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.544503 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c01a7e05-aa67-4606-9a08-c7a91dd9b332" path="/var/lib/kubelet/pods/c01a7e05-aa67-4606-9a08-c7a91dd9b332/volumes" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.546412 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e731f56b-df87-43c2-9b58-dcb496df80c9" path="/var/lib/kubelet/pods/e731f56b-df87-43c2-9b58-dcb496df80c9/volumes" Feb 03 10:25:18 crc kubenswrapper[5010]: I0203 10:25:18.858374 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.373489 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tptfc" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.379323 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:25:19 crc kubenswrapper[5010]: W0203 10:25:19.407964 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ef87127_760d_4f81_8a78_a06d074c7ec3.slice/crio-6bd4ac18ae915fc96ca9ce387172eccabbebfdb18cd09371727e5b54df8c7288 WatchSource:0}: Error finding container 6bd4ac18ae915fc96ca9ce387172eccabbebfdb18cd09371727e5b54df8c7288: Status 404 returned error can't find the container with id 6bd4ac18ae915fc96ca9ce387172eccabbebfdb18cd09371727e5b54df8c7288 Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.489695 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle\") pod \"29ef610c-3c09-4b27-9b97-3a5350388caa\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.489788 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcm2f\" (UniqueName: \"kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f\") pod \"29ef610c-3c09-4b27-9b97-3a5350388caa\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.489890 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs\") pod \"29ef610c-3c09-4b27-9b97-3a5350388caa\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.489932 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data\") pod \"29ef610c-3c09-4b27-9b97-3a5350388caa\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.489993 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts\") pod \"29ef610c-3c09-4b27-9b97-3a5350388caa\" (UID: \"29ef610c-3c09-4b27-9b97-3a5350388caa\") " Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.492818 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs" (OuterVolumeSpecName: "logs") pod "29ef610c-3c09-4b27-9b97-3a5350388caa" (UID: "29ef610c-3c09-4b27-9b97-3a5350388caa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.511022 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts" (OuterVolumeSpecName: "scripts") pod "29ef610c-3c09-4b27-9b97-3a5350388caa" (UID: "29ef610c-3c09-4b27-9b97-3a5350388caa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.511336 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f" (OuterVolumeSpecName: "kube-api-access-wcm2f") pod "29ef610c-3c09-4b27-9b97-3a5350388caa" (UID: "29ef610c-3c09-4b27-9b97-3a5350388caa"). InnerVolumeSpecName "kube-api-access-wcm2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.558145 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data" (OuterVolumeSpecName: "config-data") pod "29ef610c-3c09-4b27-9b97-3a5350388caa" (UID: "29ef610c-3c09-4b27-9b97-3a5350388caa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.584795 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29ef610c-3c09-4b27-9b97-3a5350388caa" (UID: "29ef610c-3c09-4b27-9b97-3a5350388caa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.595458 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.595553 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcm2f\" (UniqueName: \"kubernetes.io/projected/29ef610c-3c09-4b27-9b97-3a5350388caa-kube-api-access-wcm2f\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.595572 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29ef610c-3c09-4b27-9b97-3a5350388caa-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.595585 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.595596 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ef610c-3c09-4b27-9b97-3a5350388caa-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.659675 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerStarted","Data":"1764b6a93e3f3ed5e01b4b46981d2b3555284f7ada6ea1b560610775c21c68d5"} Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.665695 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tptfc" event={"ID":"29ef610c-3c09-4b27-9b97-3a5350388caa","Type":"ContainerDied","Data":"8dff0c755a50d3ce83f3790da9a77abbdd3719d09b62bae731558162867118c1"} Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.665759 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dff0c755a50d3ce83f3790da9a77abbdd3719d09b62bae731558162867118c1" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.665767 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tptfc" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.681548 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerStarted","Data":"6bd4ac18ae915fc96ca9ce387172eccabbebfdb18cd09371727e5b54df8c7288"} Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.743804 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7f744c8944-2zwzr"] Feb 03 10:25:19 crc kubenswrapper[5010]: E0203 10:25:19.744500 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" containerName="placement-db-sync" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.744522 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" containerName="placement-db-sync" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.744747 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" containerName="placement-db-sync" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.746140 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.753953 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.754330 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.754507 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.755041 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dtdfs" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.755086 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.765782 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f744c8944-2zwzr"] Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.902842 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.902917 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj8c4\" (UniqueName: \"kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.902969 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " 
pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.903018 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.903049 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.903079 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:19 crc kubenswrapper[5010]: I0203 10:25:19.903139 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.368102 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.377142 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj8c4\" (UniqueName: \"kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.377363 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.377519 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.377605 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 
03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.385974 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.390494 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.393963 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.397565 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.397754 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.398420 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.421976 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.435236 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.444203 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj8c4\" (UniqueName: \"kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4\") pod \"placement-7f744c8944-2zwzr\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.688832 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.721017 5010 generic.go:334] "Generic (PLEG): container finished" podID="457510b3-7c5a-456d-9df3-54fa7dee8c4b" containerID="eec510d597d8f2314ae76e8de6136bb5224447e6e83068a025a8dfed4080a04f" exitCode=0 Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.720930 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-swx9t" event={"ID":"457510b3-7c5a-456d-9df3-54fa7dee8c4b","Type":"ContainerDied","Data":"eec510d597d8f2314ae76e8de6136bb5224447e6e83068a025a8dfed4080a04f"} Feb 03 10:25:20 crc kubenswrapper[5010]: I0203 10:25:20.978517 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.071008 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.071429 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="dnsmasq-dns" containerID="cri-o://c9a7cc65c09b93f157cada4e0c074bf50be6834a16b4169ebac2602a35731c7e" gracePeriod=10 Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.378976 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f744c8944-2zwzr"] Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.552604 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.796056 5010 generic.go:334] "Generic (PLEG): container finished" podID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerID="c9a7cc65c09b93f157cada4e0c074bf50be6834a16b4169ebac2602a35731c7e" exitCode=0 Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.796737 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" event={"ID":"9eb55fd4-6f97-47c3-bd98-89ca6331cf88","Type":"ContainerDied","Data":"c9a7cc65c09b93f157cada4e0c074bf50be6834a16b4169ebac2602a35731c7e"} Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.803992 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerStarted","Data":"d96c848085855a1aab0bb15f4dcb25d155e8b02a76c2309a7e985e9edc63c08c"} Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.812687 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerStarted","Data":"55bbb2cde20dfdcd53e2ce462c09a9714ec6a75aaad1416462255a0ed6efb0a8"} Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.822501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerStarted","Data":"089e9b9bfea0632f8dc13a626391ff9a317374bb6a62f576e2749c15e06ebc0d"} Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.853008 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x"
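
Just above, the kubelet kills the dnsmasq-dns container with gracePeriod=10 at 10:25:21.071429, and PLEG reports the container finished with exitCode=0 at 10:25:21.796056, so the process evidently exited cleanly on the termination signal well inside its grace window rather than being force-killed. The arithmetic, using only timestamps copied from those two records:

    from datetime import datetime

    # Timestamps hand-copied from the "Killing container with a grace period"
    # and "Generic (PLEG): container finished" records above.
    killed = datetime.strptime('10:25:21.071429', '%H:%M:%S.%f')
    died = datetime.strptime('10:25:21.796056', '%H:%M:%S.%f')
    print((died - killed).total_seconds())  # 0.724627, far under the 10s gracePeriod
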
Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907192 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907361 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907420 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907487 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907533 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.907592 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqnmc\" (UniqueName: \"kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc\") pod \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\" (UID: \"9eb55fd4-6f97-47c3-bd98-89ca6331cf88\") " Feb 03 10:25:21 crc kubenswrapper[5010]: I0203 10:25:21.924655 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc" (OuterVolumeSpecName: "kube-api-access-zqnmc") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "kube-api-access-zqnmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.015756 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqnmc\" (UniqueName: \"kubernetes.io/projected/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-kube-api-access-zqnmc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.076712 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.118442 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.397568 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.405616 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.412171 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config" (OuterVolumeSpecName: "config") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.421917 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9eb55fd4-6f97-47c3-bd98-89ca6331cf88" (UID: "9eb55fd4-6f97-47c3-bd98-89ca6331cf88"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.437897 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.437944 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.437960 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.437974 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eb55fd4-6f97-47c3-bd98-89ca6331cf88-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.716025 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-swx9t" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.806261 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852392 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk8xc\" (UniqueName: \"kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852534 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852580 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852611 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852795 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.852845 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys\") pod \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\" (UID: \"457510b3-7c5a-456d-9df3-54fa7dee8c4b\") " Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.868081 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerStarted","Data":"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8"} Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.872976 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts" (OuterVolumeSpecName: "scripts") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.877582 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc" (OuterVolumeSpecName: "kube-api-access-jk8xc") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "kube-api-access-jk8xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.879713 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" event={"ID":"9eb55fd4-6f97-47c3-bd98-89ca6331cf88","Type":"ContainerDied","Data":"93d0e004e008b5e1b05321fcaf14211b090b2038acd1b389851fdfc6ab3c1331"} Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.879780 5010 scope.go:117] "RemoveContainer" containerID="c9a7cc65c09b93f157cada4e0c074bf50be6834a16b4169ebac2602a35731c7e" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.879997 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.883445 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.886002 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.890817 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerStarted","Data":"25ca14ceea3124e9ce28f484389b454fe015ddd37e62df01b7fb16db5f838f83"} Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.902515 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-swx9t" event={"ID":"457510b3-7c5a-456d-9df3-54fa7dee8c4b","Type":"ContainerDied","Data":"9bb617f937270e1fe6e444469ff83627ed35fc24df5672358eff75f2893f7693"} Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.902587 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb617f937270e1fe6e444469ff83627ed35fc24df5672358eff75f2893f7693" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.902693 5010 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.956046 5010 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.956093 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk8xc\" (UniqueName: \"kubernetes.io/projected/457510b3-7c5a-456d-9df3-54fa7dee8c4b-kube-api-access-jk8xc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.956108 5010 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.956118 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.961259 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-675cc696d4-7wvtv"] Feb 03 10:25:22 crc kubenswrapper[5010]: E0203 10:25:22.962033 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457510b3-7c5a-456d-9df3-54fa7dee8c4b" containerName="keystone-bootstrap" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.962068 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="457510b3-7c5a-456d-9df3-54fa7dee8c4b" containerName="keystone-bootstrap" Feb 03 10:25:22 crc kubenswrapper[5010]: E0203 10:25:22.962097 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="init" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.962105 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="init" Feb 03 10:25:22 crc kubenswrapper[5010]: E0203 10:25:22.962119 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="dnsmasq-dns" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.962130 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="dnsmasq-dns" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.963873 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="457510b3-7c5a-456d-9df3-54fa7dee8c4b" containerName="keystone-bootstrap" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.963924 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="dnsmasq-dns" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.964913 5010 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.969270 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 03 10:25:22 crc kubenswrapper[5010]: I0203 10:25:22.981020 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.023297 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.023256949 podStartE2EDuration="6.023256949s" podCreationTimestamp="2026-02-03 10:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:22.927910261 +0000 UTC m=+1393.083886390" watchObservedRunningTime="2026-02-03 10:25:23.023256949 +0000 UTC m=+1393.179233078" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.039807 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-675cc696d4-7wvtv"] Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.045654 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.058138 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-scripts\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.058562 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n95h6\" (UniqueName: \"kubernetes.io/projected/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-kube-api-access-n95h6\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.058813 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-internal-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059005 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-public-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059162 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-combined-ca-bundle\") pod \"keystone-675cc696d4-7wvtv\" (UID: 
\"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059333 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-fernet-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059448 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-credential-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059514 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-config-data\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.059774 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.096611 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data" (OuterVolumeSpecName: "config-data") pod "457510b3-7c5a-456d-9df3-54fa7dee8c4b" (UID: "457510b3-7c5a-456d-9df3-54fa7dee8c4b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.128313 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162403 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n95h6\" (UniqueName: \"kubernetes.io/projected/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-kube-api-access-n95h6\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162523 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-internal-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162556 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-public-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162607 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-combined-ca-bundle\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162646 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-fernet-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162676 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-credential-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162706 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-config-data\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.162756 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-scripts\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 
10:25:23.162814 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/457510b3-7c5a-456d-9df3-54fa7dee8c4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.172522 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-combined-ca-bundle\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.173767 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-scripts\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.178321 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-fernet-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.200243 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-credential-keys\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.201087 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-public-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.204178 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-config-data\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.209173 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-internal-tls-certs\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.231131 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n95h6\" (UniqueName: \"kubernetes.io/projected/8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4-kube-api-access-n95h6\") pod \"keystone-675cc696d4-7wvtv\" (UID: \"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4\") " pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.337334 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.339535 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.348868 5010 scope.go:117] "RemoveContainer" containerID="9870cb3be829d265aa30927c41a48cc7802f5d65aec23cea9f8bcd10b02b6b19" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.359259 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-tpx4x"] Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.800269 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bc6c5cf68-f9b4p"] Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.803192 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.883129 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bc6c5cf68-f9b4p"] Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.922938 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-logs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923085 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898jq\" (UniqueName: \"kubernetes.io/projected/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-kube-api-access-898jq\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923140 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-public-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923248 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-internal-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923329 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-combined-ca-bundle\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923485 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-scripts\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:23 crc kubenswrapper[5010]: I0203 10:25:23.923761 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-config-data\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.006256 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerStarted","Data":"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d"} Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.006843 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.006995 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.030732 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-scripts\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.030924 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-config-data\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.031087 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-logs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.031144 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-898jq\" (UniqueName: \"kubernetes.io/projected/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-kube-api-access-898jq\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.031181 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-public-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.031245 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-internal-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.031288 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-combined-ca-bundle\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc 
kubenswrapper[5010]: I0203 10:25:24.032332 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-logs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.040999 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerStarted","Data":"9b0678012ddc709164e9aead0d03359efde01194b4a43605e01e402b58fd05e9"} Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.046920 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6tdx" event={"ID":"bad34e68-b20a-486c-b06b-e19f5aaaf917","Type":"ContainerStarted","Data":"56c4bc07b47d992164c95f2c4bc219b10e3ec8444d085ea923e9fc23515c64b1"} Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.070449 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-config-data\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.071322 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-combined-ca-bundle\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.072669 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-scripts\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.077793 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-internal-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.085688 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-public-tls-certs\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.089536 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7f744c8944-2zwzr" podStartSLOduration=5.089493756 podStartE2EDuration="5.089493756s" podCreationTimestamp="2026-02-03 10:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:24.049237268 +0000 UTC m=+1394.205213397" watchObservedRunningTime="2026-02-03 10:25:24.089493756 +0000 UTC m=+1394.245469885" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.090133 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-898jq\" 
(UniqueName: \"kubernetes.io/projected/3ecd94c1-1faa-4acd-aa24-dd54388d2d99-kube-api-access-898jq\") pod \"placement-bc6c5cf68-f9b4p\" (UID: \"3ecd94c1-1faa-4acd-aa24-dd54388d2d99\") " pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.101658 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-675cc696d4-7wvtv"] Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.150978 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.15094334 podStartE2EDuration="7.15094334s" podCreationTimestamp="2026-02-03 10:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:24.102674316 +0000 UTC m=+1394.258650445" watchObservedRunningTime="2026-02-03 10:25:24.15094334 +0000 UTC m=+1394.306919469" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.165953 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.212682 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-g6tdx" podStartSLOduration=6.911040575 podStartE2EDuration="52.212651171s" podCreationTimestamp="2026-02-03 10:24:32 +0000 UTC" firstStartedPulling="2026-02-03 10:24:36.749046051 +0000 UTC m=+1346.905022180" lastFinishedPulling="2026-02-03 10:25:22.050656647 +0000 UTC m=+1392.206632776" observedRunningTime="2026-02-03 10:25:24.135490882 +0000 UTC m=+1394.291467011" watchObservedRunningTime="2026-02-03 10:25:24.212651171 +0000 UTC m=+1394.368627300" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.593432 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" path="/var/lib/kubelet/pods/9eb55fd4-6f97-47c3-bd98-89ca6331cf88/volumes" Feb 03 10:25:24 crc kubenswrapper[5010]: I0203 10:25:24.923233 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bc6c5cf68-f9b4p"] Feb 03 10:25:25 crc kubenswrapper[5010]: I0203 10:25:25.061441 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-675cc696d4-7wvtv" event={"ID":"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4","Type":"ContainerStarted","Data":"d6dac3e484a005977351cb033c83c44ebc6eb341c4e0affdfc49420dab5add60"} Feb 03 10:25:25 crc kubenswrapper[5010]: I0203 10:25:25.061527 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-675cc696d4-7wvtv" event={"ID":"8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4","Type":"ContainerStarted","Data":"d12ca4ec55cc75e892ab98ddfbd2ac34d23b60e39019acf45130c87cd0b772e5"} Feb 03 10:25:25 crc kubenswrapper[5010]: I0203 10:25:25.062258 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:25 crc kubenswrapper[5010]: I0203 10:25:25.112251 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-675cc696d4-7wvtv" podStartSLOduration=3.112177091 podStartE2EDuration="3.112177091s" podCreationTimestamp="2026-02-03 10:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:25.103158468 +0000 UTC m=+1395.259134597" watchObservedRunningTime="2026-02-03 10:25:25.112177091 +0000 UTC m=+1395.268153230" Feb 03 10:25:26 crc 
kubenswrapper[5010]: I0203 10:25:26.085634 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9wwp" event={"ID":"1acc33e7-f3ae-4131-a003-aa6b592269c6","Type":"ContainerStarted","Data":"90f279a47e6694b954d6224d0a36d83bb292142a861407bbd952b7ac0f3f1940"} Feb 03 10:25:26 crc kubenswrapper[5010]: I0203 10:25:26.115790 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-b9wwp" podStartSLOduration=7.147739497 podStartE2EDuration="54.115764252s" podCreationTimestamp="2026-02-03 10:24:32 +0000 UTC" firstStartedPulling="2026-02-03 10:24:37.165572258 +0000 UTC m=+1347.321548387" lastFinishedPulling="2026-02-03 10:25:24.133597013 +0000 UTC m=+1394.289573142" observedRunningTime="2026-02-03 10:25:26.114487919 +0000 UTC m=+1396.270464048" watchObservedRunningTime="2026-02-03 10:25:26.115764252 +0000 UTC m=+1396.271740381" Feb 03 10:25:26 crc kubenswrapper[5010]: I0203 10:25:26.787707 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-tpx4x" podUID="9eb55fd4-6f97-47c3-bd98-89ca6331cf88" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Feb 03 10:25:27 crc kubenswrapper[5010]: W0203 10:25:27.873899 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ecd94c1_1faa_4acd_aa24_dd54388d2d99.slice/crio-18568e750adc664c2b522c22bba83c2766ecf2703b1e46b06ebeeaeaf7db2912 WatchSource:0}: Error finding container 18568e750adc664c2b522c22bba83c2766ecf2703b1e46b06ebeeaeaf7db2912: Status 404 returned error can't find the container with id 18568e750adc664c2b522c22bba83c2766ecf2703b1e46b06ebeeaeaf7db2912 Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.066626 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.066697 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.123410 5010 generic.go:334] "Generic (PLEG): container finished" podID="bad34e68-b20a-486c-b06b-e19f5aaaf917" containerID="56c4bc07b47d992164c95f2c4bc219b10e3ec8444d085ea923e9fc23515c64b1" exitCode=0 Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.123541 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6tdx" event={"ID":"bad34e68-b20a-486c-b06b-e19f5aaaf917","Type":"ContainerDied","Data":"56c4bc07b47d992164c95f2c4bc219b10e3ec8444d085ea923e9fc23515c64b1"} Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.128590 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc6c5cf68-f9b4p" event={"ID":"3ecd94c1-1faa-4acd-aa24-dd54388d2d99","Type":"ContainerStarted","Data":"18568e750adc664c2b522c22bba83c2766ecf2703b1e46b06ebeeaeaf7db2912"} Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.134168 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.135001 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.139230 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 
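
The "Generic (PLEG): container finished" entry above comes from the pod lifecycle event generator, which periodically relists containers and turns state deltas into the ContainerStarted/ContainerDied events the sync loop consumes. A toy relist showing only the diff idea; real PLEG (pkg/kubelet/pleg) also tracks sandboxes, exit codes, and a pod-level cache:

```go
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

// relist diffs two container-state snapshots and emits lifecycle events,
// the way the log pairs an exitCode observation with a ContainerDied event.
func relist(prev, curr map[string]state) []string {
	var events []string
	for id, s := range curr {
		switch {
		case prev[id] != running && s == running:
			events = append(events, "ContainerStarted "+id)
		case prev[id] == running && s == exited:
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	before := map[string]state{"56c4bc07": running}
	after := map[string]state{"56c4bc07": exited}
	fmt.Println(relist(before, after)) // [ContainerDied 56c4bc07]
}
```
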
10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.519267 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.519870 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.625802 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.702256 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.892444 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"] Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.895950 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.916353 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"] Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.995030 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xt6g\" (UniqueName: \"kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.995187 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:28 crc kubenswrapper[5010]: I0203 10:25:28.995508 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.098323 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xt6g\" (UniqueName: \"kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.098464 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.098649 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.099449 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.100379 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.145539 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xt6g\" (UniqueName: \"kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g\") pod \"certified-operators-zcvn8\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.159355 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerStarted","Data":"66c74d715b2eacb41bf0f0e39922576ad416b3eb1d6ad6955ec6036858cd2f1d"} Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.181841 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc6c5cf68-f9b4p" event={"ID":"3ecd94c1-1faa-4acd-aa24-dd54388d2d99","Type":"ContainerStarted","Data":"da38cfd4d210ad528e6beb9b5e12f4d4bc0d000ce5c9371a1f32e78184a92b06"} Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.181935 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bc6c5cf68-f9b4p" event={"ID":"3ecd94c1-1faa-4acd-aa24-dd54388d2d99","Type":"ContainerStarted","Data":"aff1156efc4d495549c8c433efd558b598018579116ec1c91dc8694fdccf0411"} Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.182183 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.182851 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.182916 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.182936 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.182951 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.270876 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-bc6c5cf68-f9b4p" podStartSLOduration=6.270847299 podStartE2EDuration="6.270847299s" podCreationTimestamp="2026-02-03 10:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:29.228709982 +0000 UTC m=+1399.384686111" watchObservedRunningTime="2026-02-03 10:25:29.270847299 +0000 UTC m=+1399.426823428" Feb 03 10:25:29 crc kubenswrapper[5010]: I0203 10:25:29.286622 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:30 crc kubenswrapper[5010]: I0203 10:25:30.237524 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:30 crc kubenswrapper[5010]: I0203 10:25:30.936563 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.000518 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data\") pod \"bad34e68-b20a-486c-b06b-e19f5aaaf917\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.000618 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l7tp\" (UniqueName: \"kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp\") pod \"bad34e68-b20a-486c-b06b-e19f5aaaf917\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.000651 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle\") pod \"bad34e68-b20a-486c-b06b-e19f5aaaf917\" (UID: \"bad34e68-b20a-486c-b06b-e19f5aaaf917\") " Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.032525 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp" (OuterVolumeSpecName: "kube-api-access-6l7tp") pod "bad34e68-b20a-486c-b06b-e19f5aaaf917" (UID: "bad34e68-b20a-486c-b06b-e19f5aaaf917"). InnerVolumeSpecName "kube-api-access-6l7tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.053561 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bad34e68-b20a-486c-b06b-e19f5aaaf917" (UID: "bad34e68-b20a-486c-b06b-e19f5aaaf917"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.065656 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bad34e68-b20a-486c-b06b-e19f5aaaf917" (UID: "bad34e68-b20a-486c-b06b-e19f5aaaf917"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.105404 5010 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.105467 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l7tp\" (UniqueName: \"kubernetes.io/projected/bad34e68-b20a-486c-b06b-e19f5aaaf917-kube-api-access-6l7tp\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.105483 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bad34e68-b20a-486c-b06b-e19f5aaaf917-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.203903 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"] Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.286581 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerStarted","Data":"e35e681b91c0a3ba4c5e23b8c2426b406cc51121c6807c30d998f313924cb39e"} Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.308034 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g6tdx" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.308124 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g6tdx" event={"ID":"bad34e68-b20a-486c-b06b-e19f5aaaf917","Type":"ContainerDied","Data":"a9d5da882cdcbed71ee51c06f06cb45291d0d12cebefa2201b69150f2363476e"} Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.308169 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9d5da882cdcbed71ee51c06f06cb45291d0d12cebefa2201b69150f2363476e" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.308751 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.308973 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.310340 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:31 crc kubenswrapper[5010]: I0203 10:25:31.310356 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:31 crc kubenswrapper[5010]: E0203 10:25:31.813132 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad34e68_b20a_486c_b06b_e19f5aaaf917.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad34e68_b20a_486c_b06b_e19f5aaaf917.slice/crio-a9d5da882cdcbed71ee51c06f06cb45291d0d12cebefa2201b69150f2363476e\": RecentStats: unable to find data in memory cache]" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.357196 5010 generic.go:334] "Generic (PLEG): container finished" podID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerID="fe0ab3a7555528e34ba8c05e18f87523a24b1e0ac976b994fc2479b4a244d8aa" exitCode=0 Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.357826 5010 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerDied","Data":"fe0ab3a7555528e34ba8c05e18f87523a24b1e0ac976b994fc2479b4a244d8aa"} Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.387538 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6bdd746887-zr9j6"] Feb 03 10:25:32 crc kubenswrapper[5010]: E0203 10:25:32.392692 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" containerName="barbican-db-sync" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.392743 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" containerName="barbican-db-sync" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.393384 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" containerName="barbican-db-sync" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.395117 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.411913 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.412206 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-j94mw" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.420614 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.427732 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-85855ff49d-76x8k"] Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.430647 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.435792 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.446334 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6bdd746887-zr9j6"] Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.490415 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-85855ff49d-76x8k"] Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.496416 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data-custom\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.496875 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-logs\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.497040 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-combined-ca-bundle\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.497112 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.497180 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfvtk\" (UniqueName: \"kubernetes.io/projected/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-kube-api-access-bfvtk\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.582407 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.584588 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.609736 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttwxh\" (UniqueName: \"kubernetes.io/projected/f377630f-64f3-4fd9-8449-53d739d775c2-kube-api-access-ttwxh\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.609866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data-custom\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610025 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610112 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-logs\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610186 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data-custom\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610272 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-combined-ca-bundle\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610310 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-combined-ca-bundle\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610346 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610370 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f377630f-64f3-4fd9-8449-53d739d775c2-logs\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.610407 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfvtk\" (UniqueName: \"kubernetes.io/projected/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-kube-api-access-bfvtk\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.704383 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-logs\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714056 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data-custom\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714160 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-combined-ca-bundle\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714188 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f377630f-64f3-4fd9-8449-53d739d775c2-logs\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714268 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tngbc\" (UniqueName: \"kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714307 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714359 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 
crc kubenswrapper[5010]: I0203 10:25:32.714377 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714400 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttwxh\" (UniqueName: \"kubernetes.io/projected/f377630f-64f3-4fd9-8449-53d739d775c2-kube-api-access-ttwxh\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714421 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714502 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.714525 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.718922 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.734422 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.740319 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f377630f-64f3-4fd9-8449-53d739d775c2-logs\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.743874 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data-custom\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.755632 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-config-data\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.758002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-combined-ca-bundle\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.762160 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfvtk\" (UniqueName: \"kubernetes.io/projected/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-kube-api-access-bfvtk\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.762746 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cb276c1-b6b3-45ef-84be-8bae1d46d9d7-config-data-custom\") pod \"barbican-worker-6bdd746887-zr9j6\" (UID: \"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7\") " pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.767927 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttwxh\" (UniqueName: \"kubernetes.io/projected/f377630f-64f3-4fd9-8449-53d739d775c2-kube-api-access-ttwxh\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.772006 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f377630f-64f3-4fd9-8449-53d739d775c2-combined-ca-bundle\") pod \"barbican-keystone-listener-85855ff49d-76x8k\" (UID: \"f377630f-64f3-4fd9-8449-53d739d775c2\") " pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:32 crc kubenswrapper[5010]: I0203 10:25:32.795270 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.806317 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.819877 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.819968 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.820002 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.820101 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.820206 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tngbc\" (UniqueName: \"kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:32.820294 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.129873 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6bdd746887-zr9j6" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.138437 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.139433 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.143393 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.143826 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.144066 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.147756 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.193411 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tngbc\" (UniqueName: \"kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc\") pod \"dnsmasq-dns-85ff748b95-cxfv2\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.206482 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.211489 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.216236 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.325806 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.325878 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sz82\" (UniqueName: \"kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.326071 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.326183 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.326364 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.374189 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.430949 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.431008 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sz82\" (UniqueName: \"kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.431089 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data\") pod \"barbican-api-595698fff8-qzxdr\" (UID: 
\"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.431172 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.434585 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.437500 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.439389 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.451767 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.459366 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.486553 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sz82\" (UniqueName: \"kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82\") pod \"barbican-api-595698fff8-qzxdr\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.491886 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:33 crc kubenswrapper[5010]: I0203 10:25:33.790114 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:34 crc kubenswrapper[5010]: I0203 10:25:34.489129 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6bdd746887-zr9j6"] Feb 03 10:25:34 crc kubenswrapper[5010]: I0203 10:25:34.834003 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:34 crc kubenswrapper[5010]: I0203 10:25:34.865257 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-85855ff49d-76x8k"] Feb 03 10:25:34 crc kubenswrapper[5010]: I0203 10:25:34.942170 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.532764 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" event={"ID":"f377630f-64f3-4fd9-8449-53d739d775c2","Type":"ContainerStarted","Data":"3ad54f8c0bff3944cbe9d84e2b81608a6422ca7d9fbcefab4a5dad88134db118"} Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.538553 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bdd746887-zr9j6" event={"ID":"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7","Type":"ContainerStarted","Data":"e7c5b8603827c99eb651153c65ffaba2307d6463e666112bb27572afc0a364ba"} Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.545990 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerStarted","Data":"74673c9131b0207ab10afaa2abb5a53e1aa2d49409325c6d66e87e77d3e886a6"} Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.556439 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerStarted","Data":"276b5ede8be32b2fcd5e4dea2a354a0412bc1e3d512cddd2da2cb8731f6a5abd"} Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.573289 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" event={"ID":"73d76595-42a6-4756-a5c5-7135fe150f1e","Type":"ContainerStarted","Data":"551880a184d3cea9debb67a96e769d028b7329cfb831b90c16d9edf472195a6b"} Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.854528 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:35 crc kubenswrapper[5010]: I0203 10:25:35.854740 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.499379 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.500163 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.501514 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.644988 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.833533 5010 generic.go:334] "Generic (PLEG): container finished" podID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" 
containerID="74673c9131b0207ab10afaa2abb5a53e1aa2d49409325c6d66e87e77d3e886a6" exitCode=0 Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.833660 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerDied","Data":"74673c9131b0207ab10afaa2abb5a53e1aa2d49409325c6d66e87e77d3e886a6"} Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.840709 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerStarted","Data":"a2e083c61dc7c9a5c3fac49824f7953d3fb85c8844f8a1f4ef14207348bfa1d9"} Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.840777 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerStarted","Data":"e6b14e112fe4e444557f7a3aff312b5084d7db0d95368f7bd4f747a1a68cca9e"} Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.843091 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.843154 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.864443 5010 generic.go:334] "Generic (PLEG): container finished" podID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerID="2c19193a99dd2b89cf342b5374e508ef59ea58fbd9c5b83248ac4024b880fe95" exitCode=0 Feb 03 10:25:36 crc kubenswrapper[5010]: I0203 10:25:36.865455 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" event={"ID":"73d76595-42a6-4756-a5c5-7135fe150f1e","Type":"ContainerDied","Data":"2c19193a99dd2b89cf342b5374e508ef59ea58fbd9c5b83248ac4024b880fe95"} Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.029700 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-595698fff8-qzxdr" podStartSLOduration=4.029642674 podStartE2EDuration="4.029642674s" podCreationTimestamp="2026-02-03 10:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:36.947725913 +0000 UTC m=+1407.103702042" watchObservedRunningTime="2026-02-03 10:25:37.029642674 +0000 UTC m=+1407.185618813" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.498111 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f67746f54-2l6b9"] Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.501393 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.507029 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.507353 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.521634 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f67746f54-2l6b9"] Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552523 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552609 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-public-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552770 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data-custom\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552799 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-combined-ca-bundle\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552825 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-logs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552904 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2h2r\" (UniqueName: \"kubernetes.io/projected/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-kube-api-access-k2h2r\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.552959 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-internal-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655288 5010 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655391 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-public-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655523 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data-custom\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655552 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-combined-ca-bundle\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655583 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-logs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655701 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2h2r\" (UniqueName: \"kubernetes.io/projected/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-kube-api-access-k2h2r\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.655751 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-internal-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.662799 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-logs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.667975 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.668015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-internal-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.681262 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-config-data-custom\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.682471 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-combined-ca-bundle\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.684602 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-public-tls-certs\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.691951 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2h2r\" (UniqueName: \"kubernetes.io/projected/3bab826b-af5f-4bd1-a68a-0bdda5f89d80-kube-api-access-k2h2r\") pod \"barbican-api-6f67746f54-2l6b9\" (UID: \"3bab826b-af5f-4bd1-a68a-0bdda5f89d80\") " pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.835982 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.888179 5010 generic.go:334] "Generic (PLEG): container finished" podID="1acc33e7-f3ae-4131-a003-aa6b592269c6" containerID="90f279a47e6694b954d6224d0a36d83bb292142a861407bbd952b7ac0f3f1940" exitCode=0 Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.888347 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9wwp" event={"ID":"1acc33e7-f3ae-4131-a003-aa6b592269c6","Type":"ContainerDied","Data":"90f279a47e6694b954d6224d0a36d83bb292142a861407bbd952b7ac0f3f1940"} Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.900328 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" event={"ID":"73d76595-42a6-4756-a5c5-7135fe150f1e","Type":"ContainerStarted","Data":"bcf09a9582e13a71a798c91df881d34f9629fd8355c0382e4f0464933e875d83"} Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.900392 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:37 crc kubenswrapper[5010]: I0203 10:25:37.950628 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" podStartSLOduration=5.950601296 podStartE2EDuration="5.950601296s" podCreationTimestamp="2026-02-03 10:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:37.936170354 +0000 UTC m=+1408.092146483" watchObservedRunningTime="2026-02-03 10:25:37.950601296 +0000 UTC m=+1408.106577425" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.437150 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.505131 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.505361 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f846k\" (UniqueName: \"kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.505497 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.506631 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.506657 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.506684 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data\") pod \"1acc33e7-f3ae-4131-a003-aa6b592269c6\" (UID: \"1acc33e7-f3ae-4131-a003-aa6b592269c6\") " Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.506836 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.507313 5010 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1acc33e7-f3ae-4131-a003-aa6b592269c6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.517180 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k" (OuterVolumeSpecName: "kube-api-access-f846k") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "kube-api-access-f846k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.519746 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts" (OuterVolumeSpecName: "scripts") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.523667 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.561724 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.610247 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.611049 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f846k\" (UniqueName: \"kubernetes.io/projected/1acc33e7-f3ae-4131-a003-aa6b592269c6-kube-api-access-f846k\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.611141 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.611339 5010 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.610332 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data" (OuterVolumeSpecName: "config-data") pod "1acc33e7-f3ae-4131-a003-aa6b592269c6" (UID: "1acc33e7-f3ae-4131-a003-aa6b592269c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.663032 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f67746f54-2l6b9"] Feb 03 10:25:39 crc kubenswrapper[5010]: W0203 10:25:39.677724 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bab826b_af5f_4bd1_a68a_0bdda5f89d80.slice/crio-afff48a1e1ecd4c286603cd076555cc140e616a151373500817c0a06f61bf018 WatchSource:0}: Error finding container afff48a1e1ecd4c286603cd076555cc140e616a151373500817c0a06f61bf018: Status 404 returned error can't find the container with id afff48a1e1ecd4c286603cd076555cc140e616a151373500817c0a06f61bf018 Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.714104 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acc33e7-f3ae-4131-a003-aa6b592269c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.940252 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" event={"ID":"f377630f-64f3-4fd9-8449-53d739d775c2","Type":"ContainerStarted","Data":"079eb74ecfdda51918da6c05552d51a853d958a4a620100baab1538f28f5e1a5"} Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.940310 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" event={"ID":"f377630f-64f3-4fd9-8449-53d739d775c2","Type":"ContainerStarted","Data":"9c4c6374d0b4cf420352015671ce87dd26cc2f7b9e3ef6b958122f72004ad8f7"} Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.968458 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f67746f54-2l6b9" event={"ID":"3bab826b-af5f-4bd1-a68a-0bdda5f89d80","Type":"ContainerStarted","Data":"74ccd033c9e2884f72e2c4c1b6c4e0e23117a22a85159de4838754ce36874bb7"} Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.968539 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f67746f54-2l6b9" event={"ID":"3bab826b-af5f-4bd1-a68a-0bdda5f89d80","Type":"ContainerStarted","Data":"afff48a1e1ecd4c286603cd076555cc140e616a151373500817c0a06f61bf018"} Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.988137 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bdd746887-zr9j6" event={"ID":"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7","Type":"ContainerStarted","Data":"1766ead65b13e47af68980b44ad86d632e4554f234b2ac1717f8ed7db11a09c1"} Feb 03 10:25:39 crc kubenswrapper[5010]: I0203 10:25:39.988198 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bdd746887-zr9j6" event={"ID":"4cb276c1-b6b3-45ef-84be-8bae1d46d9d7","Type":"ContainerStarted","Data":"ec158f66c9b3c707bfeee50c71073878189b9fd5415bd191cd57d56e768c8590"} Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.002747 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerStarted","Data":"8340acedc9cfb7958b5ed0fad5a8c1555a0dabbb9f7998f97b867b7a3dd1d05e"} Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.010606 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-b9wwp" 
event={"ID":"1acc33e7-f3ae-4131-a003-aa6b592269c6","Type":"ContainerDied","Data":"dcbb37a8fd2f82ef82d966d8287692e503ed1134f141d666defaaf1447e6aa0a"} Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.010926 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcbb37a8fd2f82ef82d966d8287692e503ed1134f141d666defaaf1447e6aa0a" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.011142 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-b9wwp" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.108207 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-85855ff49d-76x8k" podStartSLOduration=4.082653492 podStartE2EDuration="8.108167677s" podCreationTimestamp="2026-02-03 10:25:32 +0000 UTC" firstStartedPulling="2026-02-03 10:25:35.004523029 +0000 UTC m=+1405.160499158" lastFinishedPulling="2026-02-03 10:25:39.030037214 +0000 UTC m=+1409.186013343" observedRunningTime="2026-02-03 10:25:39.985723991 +0000 UTC m=+1410.141700120" watchObservedRunningTime="2026-02-03 10:25:40.108167677 +0000 UTC m=+1410.264143806" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.139461 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6bdd746887-zr9j6" podStartSLOduration=3.687187487 podStartE2EDuration="8.139422283s" podCreationTimestamp="2026-02-03 10:25:32 +0000 UTC" firstStartedPulling="2026-02-03 10:25:34.577699575 +0000 UTC m=+1404.733675704" lastFinishedPulling="2026-02-03 10:25:39.029934371 +0000 UTC m=+1409.185910500" observedRunningTime="2026-02-03 10:25:40.021192285 +0000 UTC m=+1410.177168414" watchObservedRunningTime="2026-02-03 10:25:40.139422283 +0000 UTC m=+1410.295398412" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.220798 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zcvn8" podStartSLOduration=5.56384366 podStartE2EDuration="12.22075683s" podCreationTimestamp="2026-02-03 10:25:28 +0000 UTC" firstStartedPulling="2026-02-03 10:25:32.371951993 +0000 UTC m=+1402.527928122" lastFinishedPulling="2026-02-03 10:25:39.028865163 +0000 UTC m=+1409.184841292" observedRunningTime="2026-02-03 10:25:40.0597629 +0000 UTC m=+1410.215739029" watchObservedRunningTime="2026-02-03 10:25:40.22075683 +0000 UTC m=+1410.376732959" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.369983 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:25:40 crc kubenswrapper[5010]: E0203 10:25:40.371169 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" containerName="cinder-db-sync" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.371191 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" containerName="cinder-db-sync" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.371476 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" containerName="cinder-db-sync" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.416943 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.428532 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.428941 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.431238 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.441443 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gk5q6" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.443115 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.455892 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.456362 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrcvl\" (UniqueName: \"kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.456472 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.460726 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.460896 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.461395 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.550966 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.551336 5010 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="dnsmasq-dns" containerID="cri-o://bcf09a9582e13a71a798c91df881d34f9629fd8355c0382e4f0464933e875d83" gracePeriod=10 Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564208 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564281 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564350 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrcvl\" (UniqueName: \"kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564370 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564398 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.564420 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.575825 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.592015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.592240 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: 
I0203 10:25:40.595859 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.611242 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.627375 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrcvl\" (UniqueName: \"kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl\") pod \"cinder-scheduler-0\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.645364 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.648267 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.659838 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.662363 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.667169 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.675413 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.692276 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787342 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787481 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8w9d\" (UniqueName: \"kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787511 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787540 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787669 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.787868 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788040 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788079 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788249 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788336 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788623 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788663 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvk2j\" (UniqueName: \"kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.788727 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.821276 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.892201 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.892995 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.893047 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.893143 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.893476 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.895927 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.897526 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.893206 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898003 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898070 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvk2j\" (UniqueName: \"kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898157 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898251 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898536 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8w9d\" (UniqueName: \"kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898591 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898651 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.898776 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.900502 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.901163 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.902229 5010 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.922761 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.928462 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.936579 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.939730 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvk2j\" (UniqueName: \"kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.941189 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.944709 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data\") pod \"cinder-api-0\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " pod="openstack/cinder-api-0" Feb 03 10:25:40 crc kubenswrapper[5010]: I0203 10:25:40.977805 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8w9d\" (UniqueName: \"kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d\") pod \"dnsmasq-dns-5c9776ccc5-6vbfz\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.021534 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-867995856-hbnv9" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.137881 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.148912 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f67746f54-2l6b9" event={"ID":"3bab826b-af5f-4bd1-a68a-0bdda5f89d80","Type":"ContainerStarted","Data":"f91c84248ede11ce656d67a63993f3673baa475b80485b6b3e89ecf47a959661"} Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.162741 5010 generic.go:334] "Generic (PLEG): container finished" podID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerID="bcf09a9582e13a71a798c91df881d34f9629fd8355c0382e4f0464933e875d83" exitCode=0 Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.162947 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" event={"ID":"73d76595-42a6-4756-a5c5-7135fe150f1e","Type":"ContainerDied","Data":"bcf09a9582e13a71a798c91df881d34f9629fd8355c0382e4f0464933e875d83"} Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.171872 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.189624 5010 generic.go:334] "Generic (PLEG): container finished" podID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerID="1e0c0b172a23175ded34e25aee553cea1577eb12ecd614b67b01f55633483ef4" exitCode=137 Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.189676 5010 generic.go:334] "Generic (PLEG): container finished" podID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerID="5ec57a7e44cc0f82c124057f7268cf9e4686f96d4ca8ba657715ac39cccda8e4" exitCode=137 Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.191121 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerDied","Data":"1e0c0b172a23175ded34e25aee553cea1577eb12ecd614b67b01f55633483ef4"} Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.191167 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerDied","Data":"5ec57a7e44cc0f82c124057f7268cf9e4686f96d4ca8ba657715ac39cccda8e4"} Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.204940 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f67746f54-2l6b9" podStartSLOduration=4.20490728 podStartE2EDuration="4.20490728s" podCreationTimestamp="2026-02-03 10:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:41.186255019 +0000 UTC m=+1411.342231158" watchObservedRunningTime="2026-02-03 10:25:41.20490728 +0000 UTC m=+1411.360883419" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.637823 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.638685 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58c5b6f6cc-94dq7" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-api" containerID="cri-o://f95d5f955943f1d6179b138d89e148c3a26347690a24c1fd2737b1cfd76d3955" gracePeriod=30 Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.640105 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58c5b6f6cc-94dq7" 
podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" containerID="cri-o://e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded" gracePeriod=30 Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.696096 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78c78c7889-r9575"] Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.704514 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.711846 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78c78c7889-r9575"] Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.724316 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854379 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854475 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854592 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854634 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854841 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.854924 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tngbc\" (UniqueName: \"kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc\") pod \"73d76595-42a6-4756-a5c5-7135fe150f1e\" (UID: \"73d76595-42a6-4756-a5c5-7135fe150f1e\") " Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.864245 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncb9c\" (UniqueName: \"kubernetes.io/projected/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-kube-api-access-ncb9c\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.864584 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-httpd-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.864985 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-ovndb-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.865092 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-public-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.865294 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-combined-ca-bundle\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.865337 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-internal-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.866034 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.898650 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc" (OuterVolumeSpecName: "kube-api-access-tngbc") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "kube-api-access-tngbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.981461 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncb9c\" (UniqueName: \"kubernetes.io/projected/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-kube-api-access-ncb9c\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.982706 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-httpd-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.983287 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-ovndb-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.983400 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-public-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.983615 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-combined-ca-bundle\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.983662 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-internal-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.983846 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:41 crc kubenswrapper[5010]: I0203 10:25:41.984178 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tngbc\" (UniqueName: \"kubernetes.io/projected/73d76595-42a6-4756-a5c5-7135fe150f1e-kube-api-access-tngbc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.057881 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-ovndb-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.058880 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-combined-ca-bundle\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.069306 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-httpd-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.069938 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-public-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.070409 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-58c5b6f6cc-94dq7" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9696/\": read tcp 10.217.0.2:38112->10.217.0.150:9696: read: connection reset by peer" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.071797 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncb9c\" (UniqueName: \"kubernetes.io/projected/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-kube-api-access-ncb9c\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.076075 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-config\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.112288 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/158ac65e-849e-4f85-a4b6-1ac4bde1a1ec-internal-tls-certs\") pod \"neutron-78c78c7889-r9575\" (UID: \"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec\") " pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.198572 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.198590 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.232417 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.298132 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.298115 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cxfv2" event={"ID":"73d76595-42a6-4756-a5c5-7135fe150f1e","Type":"ContainerDied","Data":"551880a184d3cea9debb67a96e769d028b7329cfb831b90c16d9edf472195a6b"} Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.298366 5010 scope.go:117] "RemoveContainer" containerID="bcf09a9582e13a71a798c91df881d34f9629fd8355c0382e4f0464933e875d83" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.299079 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.299157 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.303193 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.303269 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.303282 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.308167 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config" (OuterVolumeSpecName: "config") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.317868 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73d76595-42a6-4756-a5c5-7135fe150f1e" (UID: "73d76595-42a6-4756-a5c5-7135fe150f1e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.373928 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.415483 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.415943 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d76595-42a6-4756-a5c5-7135fe150f1e-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.440798 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.535274 5010 scope.go:117] "RemoveContainer" containerID="2c19193a99dd2b89cf342b5374e508ef59ea58fbd9c5b83248ac4024b880fe95" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.622348 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs\") pod \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.622496 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data\") pod \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.622642 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts\") pod \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.622860 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key\") pod \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.623000 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d4dk\" (UniqueName: \"kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk\") pod \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\" (UID: \"716318b2-6f04-4ff9-94c2-e107ebf51cb6\") " Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.625455 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.626487 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs" (OuterVolumeSpecName: "logs") pod "716318b2-6f04-4ff9-94c2-e107ebf51cb6" (UID: "716318b2-6f04-4ff9-94c2-e107ebf51cb6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.627781 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/716318b2-6f04-4ff9-94c2-e107ebf51cb6-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.638749 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk" (OuterVolumeSpecName: "kube-api-access-8d4dk") pod "716318b2-6f04-4ff9-94c2-e107ebf51cb6" (UID: "716318b2-6f04-4ff9-94c2-e107ebf51cb6"). InnerVolumeSpecName "kube-api-access-8d4dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.645560 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.654557 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "716318b2-6f04-4ff9-94c2-e107ebf51cb6" (UID: "716318b2-6f04-4ff9-94c2-e107ebf51cb6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.657108 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cxfv2"] Feb 03 10:25:42 crc kubenswrapper[5010]: W0203 10:25:42.676384 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2608e076_ccd5_4d9b_9739_d2815655090e.slice/crio-8fc43be7c4e38eab87c6ce057e45c890d78c06e59c1c3f94eb288aeb3ef2742e WatchSource:0}: Error finding container 8fc43be7c4e38eab87c6ce057e45c890d78c06e59c1c3f94eb288aeb3ef2742e: Status 404 returned error can't find the container with id 8fc43be7c4e38eab87c6ce057e45c890d78c06e59c1c3f94eb288aeb3ef2742e Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.687561 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data" (OuterVolumeSpecName: "config-data") pod "716318b2-6f04-4ff9-94c2-e107ebf51cb6" (UID: "716318b2-6f04-4ff9-94c2-e107ebf51cb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: E0203 10:25:42.688658 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31521b0f_9e4f_4cfc_b0e8_e9e2bd2ca688.slice/crio-conmon-e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded.scope\": RecentStats: unable to find data in memory cache]" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.699407 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts" (OuterVolumeSpecName: "scripts") pod "716318b2-6f04-4ff9-94c2-e107ebf51cb6" (UID: "716318b2-6f04-4ff9-94c2-e107ebf51cb6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.732777 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.733308 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/716318b2-6f04-4ff9-94c2-e107ebf51cb6-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.733320 5010 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/716318b2-6f04-4ff9-94c2-e107ebf51cb6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.733352 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d4dk\" (UniqueName: \"kubernetes.io/projected/716318b2-6f04-4ff9-94c2-e107ebf51cb6-kube-api-access-8d4dk\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.815706 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.815836 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.817451 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0"} pod="openstack/horizon-7cdcd56868-k9h7g" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 10:25:42 crc kubenswrapper[5010]: I0203 10:25:42.817493 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" containerID="cri-o://2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0" gracePeriod=30 Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.022506 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.041197 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.129415 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.129562 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.130945 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" 
containerStatusID={"Type":"cri-o","ID":"45c56002ab101b0e77fc5934aa412e9d50c3e636af770ec4fe10888a673e7f7e"} pod="openstack/horizon-6cc988db4-2mpfb" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.131003 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" containerID="cri-o://45c56002ab101b0e77fc5934aa412e9d50c3e636af770ec4fe10888a673e7f7e" gracePeriod=30 Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.323356 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerStarted","Data":"8fc43be7c4e38eab87c6ce057e45c890d78c06e59c1c3f94eb288aeb3ef2742e"} Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.344041 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5b4c5ff-x859r" event={"ID":"716318b2-6f04-4ff9-94c2-e107ebf51cb6","Type":"ContainerDied","Data":"2db889447ff0bc0e6f1ca25bbfa660b5dc01678a634757b799ec80a5560e67e4"} Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.344117 5010 scope.go:117] "RemoveContainer" containerID="1e0c0b172a23175ded34e25aee553cea1577eb12ecd614b67b01f55633483ef4" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.344279 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5b4c5ff-x859r" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.367143 5010 generic.go:334] "Generic (PLEG): container finished" podID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerID="e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded" exitCode=0 Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.368547 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerDied","Data":"e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded"} Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.446611 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.484114 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b5b4c5ff-x859r"] Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.496361 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78c78c7889-r9575"] Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.613093 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-58c5b6f6cc-94dq7" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9696/\": dial tcp 10.217.0.150:9696: connect: connection refused" Feb 03 10:25:43 crc kubenswrapper[5010]: I0203 10:25:43.750922 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:25:44 crc kubenswrapper[5010]: I0203 10:25:44.554325 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" path="/var/lib/kubelet/pods/716318b2-6f04-4ff9-94c2-e107ebf51cb6/volumes" Feb 03 10:25:44 crc kubenswrapper[5010]: I0203 10:25:44.555046 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" 
path="/var/lib/kubelet/pods/73d76595-42a6-4756-a5c5-7135fe150f1e/volumes" Feb 03 10:25:46 crc kubenswrapper[5010]: I0203 10:25:46.172138 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:46 crc kubenswrapper[5010]: I0203 10:25:46.372931 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:47 crc kubenswrapper[5010]: I0203 10:25:47.479017 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:49 crc kubenswrapper[5010]: I0203 10:25:49.288422 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:49 crc kubenswrapper[5010]: I0203 10:25:49.289006 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.283184 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f67746f54-2l6b9" Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.366344 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.366865 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" containerID="cri-o://e6b14e112fe4e444557f7a3aff312b5084d7db0d95368f7bd4f747a1a68cca9e" gracePeriod=30 Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.367522 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" containerID="cri-o://a2e083c61dc7c9a5c3fac49824f7953d3fb85c8844f8a1f4ef14207348bfa1d9" gracePeriod=30 Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.384580 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": EOF" Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.384709 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": EOF" Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.411443 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zcvn8" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server" probeResult="failure" output=< Feb 03 10:25:50 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 10:25:50 crc kubenswrapper[5010]: > Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.611574 5010 generic.go:334] "Generic (PLEG): container finished" podID="34b3477b-06e6-4914-a048-54af2ebc0250" containerID="e6b14e112fe4e444557f7a3aff312b5084d7db0d95368f7bd4f747a1a68cca9e" exitCode=143 Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.611707 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerDied","Data":"e6b14e112fe4e444557f7a3aff312b5084d7db0d95368f7bd4f747a1a68cca9e"} Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.637250 5010 generic.go:334] "Generic (PLEG): container finished" podID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerID="f95d5f955943f1d6179b138d89e148c3a26347690a24c1fd2737b1cfd76d3955" exitCode=0 Feb 03 10:25:50 crc kubenswrapper[5010]: I0203 10:25:50.637424 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerDied","Data":"f95d5f955943f1d6179b138d89e148c3a26347690a24c1fd2737b1cfd76d3955"} Feb 03 10:25:52 crc kubenswrapper[5010]: I0203 10:25:52.993009 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:53 crc kubenswrapper[5010]: I0203 10:25:53.087166 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:25:54 crc kubenswrapper[5010]: W0203 10:25:54.236437 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod872497ad_02bf_48fd_9ef7_c39591cd0cf3.slice/crio-c4597e5fb6f0efc59bba027f6c62619a6af54fb50a6a0e89101889e721398156 WatchSource:0}: Error finding container c4597e5fb6f0efc59bba027f6c62619a6af54fb50a6a0e89101889e721398156: Status 404 returned error can't find the container with id c4597e5fb6f0efc59bba027f6c62619a6af54fb50a6a0e89101889e721398156 Feb 03 10:25:54 crc kubenswrapper[5010]: W0203 10:25:54.250607 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod158ac65e_849e_4f85_a4b6_1ac4bde1a1ec.slice/crio-315fe4b6a3bc1564af8b664feb3192140e44462ed97bc092ace115e8b833116f WatchSource:0}: Error finding container 315fe4b6a3bc1564af8b664feb3192140e44462ed97bc092ace115e8b833116f: Status 404 returned error can't find the container with id 315fe4b6a3bc1564af8b664feb3192140e44462ed97bc092ace115e8b833116f Feb 03 10:25:54 crc kubenswrapper[5010]: I0203 10:25:54.475851 5010 scope.go:117] "RemoveContainer" containerID="5ec57a7e44cc0f82c124057f7268cf9e4686f96d4ca8ba657715ac39cccda8e4" Feb 03 10:25:54 crc kubenswrapper[5010]: I0203 10:25:54.690616 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerStarted","Data":"c4597e5fb6f0efc59bba027f6c62619a6af54fb50a6a0e89101889e721398156"} Feb 03 10:25:54 crc kubenswrapper[5010]: I0203 10:25:54.692065 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78c78c7889-r9575" event={"ID":"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec","Type":"ContainerStarted","Data":"315fe4b6a3bc1564af8b664feb3192140e44462ed97bc092ace115e8b833116f"} Feb 03 10:25:54 crc kubenswrapper[5010]: I0203 10:25:54.696794 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" event={"ID":"b88c8b02-54df-4761-acc8-c959005f4444","Type":"ContainerStarted","Data":"2d51e4ddd011d0ec5a5a6ac940b6dc440f8c2ebbdfedfd082c8cf295f749780f"} Feb 03 10:25:55 crc kubenswrapper[5010]: E0203 10:25:55.204301 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/ubi9/httpd-24:latest" Feb 03 10:25:55 crc kubenswrapper[5010]: E0203 10:25:55.205063 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rmrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4338eb03-3ad6-4d68-8d8a-a37694aff6d7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 10:25:55 crc kubenswrapper[5010]: E0203 10:25:55.206310 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.242615 5010 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.347688 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.347771 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.347917 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.348124 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.348186 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.348402 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.348536 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnx67\" (UniqueName: \"kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67\") pod \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\" (UID: \"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688\") " Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.360827 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.365996 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67" (OuterVolumeSpecName: "kube-api-access-bnx67") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "kube-api-access-bnx67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.434547 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.443436 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.453944 5010 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.454002 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnx67\" (UniqueName: \"kubernetes.io/projected/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-kube-api-access-bnx67\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.454019 5010 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.454036 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.456069 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.474279 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config" (OuterVolumeSpecName: "config") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.514989 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" (UID: "31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.556794 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.556977 5010 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.556990 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.732852 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="ceilometer-notification-agent" containerID="cri-o://d91d141426317acd31c21e9040c1e38df0008cc513ccacd6d4ecf8718788f6f7" gracePeriod=30 Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.733051 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58c5b6f6cc-94dq7" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.736125 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c5b6f6cc-94dq7" event={"ID":"31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688","Type":"ContainerDied","Data":"b27f611dc82e161f85b167c99dbce2d08eedaac7c3dd33e70725328f6c7d0a68"} Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.736320 5010 scope.go:117] "RemoveContainer" containerID="e0894a68073b3bd07b800e9f0879ea84ca668a89746cac6928280bad0a28dded" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.737439 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="sg-core" containerID="cri-o://66c74d715b2eacb41bf0f0e39922576ad416b3eb1d6ad6955ec6036858cd2f1d" gracePeriod=30 Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.755813 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.801578 5010 scope.go:117] "RemoveContainer" containerID="f95d5f955943f1d6179b138d89e148c3a26347690a24c1fd2737b1cfd76d3955" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.830177 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.858047 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-58c5b6f6cc-94dq7"] Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.908725 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:33056->10.217.0.160:9311: read: connection reset by peer" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.909677 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-595698fff8-qzxdr" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" probeResult="failure" 
output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:33054->10.217.0.160:9311: read: connection reset by peer" Feb 03 10:25:55 crc kubenswrapper[5010]: I0203 10:25:55.921407 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bc6c5cf68-f9b4p" Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.014739 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7f744c8944-2zwzr"] Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.015040 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7f744c8944-2zwzr" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-log" containerID="cri-o://68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8" gracePeriod=30 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.015533 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7f744c8944-2zwzr" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-api" containerID="cri-o://0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d" gracePeriod=30 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.096170 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-675cc696d4-7wvtv" Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.535962 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" path="/var/lib/kubelet/pods/31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688/volumes" Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.893815 5010 generic.go:334] "Generic (PLEG): container finished" podID="b88c8b02-54df-4761-acc8-c959005f4444" containerID="49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50" exitCode=0 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.894314 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" event={"ID":"b88c8b02-54df-4761-acc8-c959005f4444","Type":"ContainerDied","Data":"49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50"} Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.900684 5010 generic.go:334] "Generic (PLEG): container finished" podID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerID="66c74d715b2eacb41bf0f0e39922576ad416b3eb1d6ad6955ec6036858cd2f1d" exitCode=2 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.900760 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerDied","Data":"66c74d715b2eacb41bf0f0e39922576ad416b3eb1d6ad6955ec6036858cd2f1d"} Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.907046 5010 generic.go:334] "Generic (PLEG): container finished" podID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerID="68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8" exitCode=143 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.907118 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerDied","Data":"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8"} Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.920239 5010 generic.go:334] "Generic (PLEG): container finished" podID="34b3477b-06e6-4914-a048-54af2ebc0250" containerID="a2e083c61dc7c9a5c3fac49824f7953d3fb85c8844f8a1f4ef14207348bfa1d9" 
exitCode=0 Feb 03 10:25:56 crc kubenswrapper[5010]: I0203 10:25:56.920316 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerDied","Data":"a2e083c61dc7c9a5c3fac49824f7953d3fb85c8844f8a1f4ef14207348bfa1d9"} Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.054449 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78c78c7889-r9575" event={"ID":"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec","Type":"ContainerStarted","Data":"96438b6700091f1bab67b947cb73994cfe7b663ebf93f9a0880f7b75b38e3533"} Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.192877 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.287414 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle\") pod \"34b3477b-06e6-4914-a048-54af2ebc0250\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.287923 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sz82\" (UniqueName: \"kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82\") pod \"34b3477b-06e6-4914-a048-54af2ebc0250\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.288100 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs\") pod \"34b3477b-06e6-4914-a048-54af2ebc0250\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.288136 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom\") pod \"34b3477b-06e6-4914-a048-54af2ebc0250\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.288166 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data\") pod \"34b3477b-06e6-4914-a048-54af2ebc0250\" (UID: \"34b3477b-06e6-4914-a048-54af2ebc0250\") " Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.288653 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs" (OuterVolumeSpecName: "logs") pod "34b3477b-06e6-4914-a048-54af2ebc0250" (UID: "34b3477b-06e6-4914-a048-54af2ebc0250"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.296020 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82" (OuterVolumeSpecName: "kube-api-access-8sz82") pod "34b3477b-06e6-4914-a048-54af2ebc0250" (UID: "34b3477b-06e6-4914-a048-54af2ebc0250"). InnerVolumeSpecName "kube-api-access-8sz82". 
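
The exit codes in the "container finished" events above encode how each container died: 143 (placement-log) is 128+SIGTERM, meaning the grace-period kill landed; 0 (the dnsmasq and barbican containers) is a clean exit; and 2 (sg-core) is an ordinary application-level failure, not a signal, since it is below 128. The 128+signal convention, checked in Go:

    package main

    import (
    	"fmt"
    	"syscall"
    )

    func main() {
    	// Runtimes report death-by-signal as 128 + signal number, so the
    	// exitCode=143 above decodes to SIGTERM (15), matching the
    	// "Killing container with a grace period" lines.
    	fmt.Println(128 + int(syscall.SIGTERM)) // 143
    }
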
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.297558 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "34b3477b-06e6-4914-a048-54af2ebc0250" (UID: "34b3477b-06e6-4914-a048-54af2ebc0250"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.327584 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34b3477b-06e6-4914-a048-54af2ebc0250" (UID: "34b3477b-06e6-4914-a048-54af2ebc0250"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.363776 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data" (OuterVolumeSpecName: "config-data") pod "34b3477b-06e6-4914-a048-54af2ebc0250" (UID: "34b3477b-06e6-4914-a048-54af2ebc0250"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.390677 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sz82\" (UniqueName: \"kubernetes.io/projected/34b3477b-06e6-4914-a048-54af2ebc0250-kube-api-access-8sz82\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.390726 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b3477b-06e6-4914-a048-54af2ebc0250-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.390741 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.390753 5010 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.390766 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b3477b-06e6-4914-a048-54af2ebc0250-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.823283 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824017 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824034 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824050 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon-log" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824055 5010 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon-log" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824071 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824077 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824094 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824099 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824115 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="init" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824121 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="init" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824128 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824135 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824154 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="dnsmasq-dns" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824159 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="dnsmasq-dns" Feb 03 10:25:57 crc kubenswrapper[5010]: E0203 10:25:57.824190 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-api" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824196 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-api" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824686 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api-log" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824701 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824716 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-api" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824734 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="31521b0f-9e4f-4cfc-b0e8-e9e2bd2ca688" containerName="neutron-httpd" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824753 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" containerName="barbican-api" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824768 5010 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="716318b2-6f04-4ff9-94c2-e107ebf51cb6" containerName="horizon-log" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.824784 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d76595-42a6-4756-a5c5-7135fe150f1e" containerName="dnsmasq-dns" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.825511 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.828311 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vzjq5" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.828519 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.829039 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.833395 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.833432 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config-secret\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.833478 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nq64\" (UniqueName: \"kubernetes.io/projected/c80632c0-72bc-461d-8e87-591d0ddbc1a8-kube-api-access-9nq64\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.833590 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.846103 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.935008 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.935116 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.935140 5010 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config-secret\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.935182 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nq64\" (UniqueName: \"kubernetes.io/projected/c80632c0-72bc-461d-8e87-591d0ddbc1a8-kube-api-access-9nq64\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.942956 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.944387 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.945044 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c80632c0-72bc-461d-8e87-591d0ddbc1a8-openstack-config-secret\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:57 crc kubenswrapper[5010]: I0203 10:25:57.961364 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nq64\" (UniqueName: \"kubernetes.io/projected/c80632c0-72bc-461d-8e87-591d0ddbc1a8-kube-api-access-9nq64\") pod \"openstackclient\" (UID: \"c80632c0-72bc-461d-8e87-591d0ddbc1a8\") " pod="openstack/openstackclient" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.078115 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" event={"ID":"b88c8b02-54df-4761-acc8-c959005f4444","Type":"ContainerStarted","Data":"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77"} Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.079539 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.082592 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerStarted","Data":"02b1b0db1e1d1490264d407bf569bd8135ae614f331340a7de745dc600379321"} Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.115270 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" podStartSLOduration=18.115238337 podStartE2EDuration="18.115238337s" podCreationTimestamp="2026-02-03 10:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:58.10333097 +0000 UTC m=+1428.259307119" watchObservedRunningTime="2026-02-03 10:25:58.115238337 +0000 UTC m=+1428.271214476" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.125453 5010 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-api-595698fff8-qzxdr" event={"ID":"34b3477b-06e6-4914-a048-54af2ebc0250","Type":"ContainerDied","Data":"276b5ede8be32b2fcd5e4dea2a354a0412bc1e3d512cddd2da2cb8731f6a5abd"} Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.125535 5010 scope.go:117] "RemoveContainer" containerID="a2e083c61dc7c9a5c3fac49824f7953d3fb85c8844f8a1f4ef14207348bfa1d9" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.125624 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-595698fff8-qzxdr" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.132373 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerStarted","Data":"8f0a78e854f7929105346a11ec8aadfb8c983687a2549ad3dc08c8797e25a961"} Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.171549 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78c78c7889-r9575" event={"ID":"158ac65e-849e-4f85-a4b6-1ac4bde1a1ec","Type":"ContainerStarted","Data":"6bd0c94c86ec6df8b63fc08a75b05e5f9fa252071bdab7ca204a7a1f441edd95"} Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.171945 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.173511 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.191711 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.220184 5010 scope.go:117] "RemoveContainer" containerID="e6b14e112fe4e444557f7a3aff312b5084d7db0d95368f7bd4f747a1a68cca9e" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.222580 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-595698fff8-qzxdr"] Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.224284 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-78c78c7889-r9575" podStartSLOduration=17.2242652 podStartE2EDuration="17.2242652s" podCreationTimestamp="2026-02-03 10:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:58.212285654 +0000 UTC m=+1428.368261803" watchObservedRunningTime="2026-02-03 10:25:58.2242652 +0000 UTC m=+1428.380241339" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.536294 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b3477b-06e6-4914-a048-54af2ebc0250" path="/var/lib/kubelet/pods/34b3477b-06e6-4914-a048-54af2ebc0250/volumes" Feb 03 10:25:58 crc kubenswrapper[5010]: I0203 10:25:58.853073 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.189956 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerStarted","Data":"6c176208520e0e4aa9ea320d1edfe8ab83a7718fb33505386deba54305a99180"} Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.190145 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.190141 5010 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api-log" containerID="cri-o://8f0a78e854f7929105346a11ec8aadfb8c983687a2549ad3dc08c8797e25a961" gracePeriod=30 Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.190162 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api" containerID="cri-o://6c176208520e0e4aa9ea320d1edfe8ab83a7718fb33505386deba54305a99180" gracePeriod=30 Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.198259 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerStarted","Data":"9afac37147605919491f382bbfc27637b26db8fa47e1eb9f1d9454af8578414f"} Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.200472 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c80632c0-72bc-461d-8e87-591d0ddbc1a8","Type":"ContainerStarted","Data":"faae3cfb1a25e4d794ba91c5f847593fa8dd9af5786ff41a891cf150c042447d"} Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.229791 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=19.229762069 podStartE2EDuration="19.229762069s" podCreationTimestamp="2026-02-03 10:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:25:59.218979492 +0000 UTC m=+1429.374955621" watchObservedRunningTime="2026-02-03 10:25:59.229762069 +0000 UTC m=+1429.385738198" Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.252444 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.4051715080000005 podStartE2EDuration="19.252401101s" podCreationTimestamp="2026-02-03 10:25:40 +0000 UTC" firstStartedPulling="2026-02-03 10:25:42.688199698 +0000 UTC m=+1412.844175827" lastFinishedPulling="2026-02-03 10:25:55.535429291 +0000 UTC m=+1425.691405420" observedRunningTime="2026-02-03 10:25:59.245953445 +0000 UTC m=+1429.401929584" watchObservedRunningTime="2026-02-03 10:25:59.252401101 +0000 UTC m=+1429.408377250" Feb 03 10:25:59 crc kubenswrapper[5010]: I0203 10:25:59.940934 5010 util.go:48] "No ready sandbox for pod can be found. 
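
The two durations in each "Observed pod startup duration" line differ only when images had to be pulled: podStartSLOduration is the end-to-end figure minus the pull window (firstStartedPulling to lastFinishedPulling). That is why cinder-api-0, whose pull timestamps are the zero value 0001-01-01, reports identical numbers, while cinder-scheduler-0 does not. Replaying cinder-scheduler-0's logged values:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Layout matching the timestamps as the kubelet prints them.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	mustParse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	firstStartedPulling := mustParse("2026-02-03 10:25:42.688199698 +0000 UTC")
    	lastFinishedPulling := mustParse("2026-02-03 10:25:55.535429291 +0000 UTC")
    	podStartE2E := 19252401101 * time.Nanosecond // 19.252401101s from the log

    	// The SLO duration excludes time spent pulling images.
    	fmt.Println(podStartE2E - lastFinishedPulling.Sub(firstStartedPulling)) // 6.405171508s
    }
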
Need to start a new one" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112077 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112340 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112393 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj8c4\" (UniqueName: \"kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112506 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112563 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112657 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.112707 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs\") pod \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\" (UID: \"8d6356a1-c07c-4d04-8d48-7f13a822ddf5\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.115571 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs" (OuterVolumeSpecName: "logs") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.144916 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts" (OuterVolumeSpecName: "scripts") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.153592 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4" (OuterVolumeSpecName: "kube-api-access-rj8c4") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "kube-api-access-rj8c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.220194 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.220271 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj8c4\" (UniqueName: \"kubernetes.io/projected/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-kube-api-access-rj8c4\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.220289 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.246389 5010 generic.go:334] "Generic (PLEG): container finished" podID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerID="d91d141426317acd31c21e9040c1e38df0008cc513ccacd6d4ecf8718788f6f7" exitCode=0 Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.246546 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerDied","Data":"d91d141426317acd31c21e9040c1e38df0008cc513ccacd6d4ecf8718788f6f7"} Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.257139 5010 generic.go:334] "Generic (PLEG): container finished" podID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerID="0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d" exitCode=0 Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.257300 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerDied","Data":"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d"} Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.257389 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f744c8944-2zwzr" event={"ID":"8d6356a1-c07c-4d04-8d48-7f13a822ddf5","Type":"ContainerDied","Data":"089e9b9bfea0632f8dc13a626391ff9a317374bb6a62f576e2749c15e06ebc0d"} Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.257415 5010 scope.go:117] "RemoveContainer" containerID="0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.257810 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f744c8944-2zwzr" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.260506 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data" (OuterVolumeSpecName: "config-data") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.278563 5010 generic.go:334] "Generic (PLEG): container finished" podID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerID="6c176208520e0e4aa9ea320d1edfe8ab83a7718fb33505386deba54305a99180" exitCode=0 Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.278631 5010 generic.go:334] "Generic (PLEG): container finished" podID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerID="8f0a78e854f7929105346a11ec8aadfb8c983687a2549ad3dc08c8797e25a961" exitCode=143 Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.279104 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerDied","Data":"6c176208520e0e4aa9ea320d1edfe8ab83a7718fb33505386deba54305a99180"} Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.279233 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerDied","Data":"8f0a78e854f7929105346a11ec8aadfb8c983687a2549ad3dc08c8797e25a961"} Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.325093 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.329702 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.336342 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.364245 5010 scope.go:117] "RemoveContainer" containerID="68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.403904 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.413372 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8d6356a1-c07c-4d04-8d48-7f13a822ddf5" (UID: "8d6356a1-c07c-4d04-8d48-7f13a822ddf5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.416700 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zcvn8" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server" probeResult="failure" output=< Feb 03 10:26:00 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 10:26:00 crc kubenswrapper[5010]: > Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.428323 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.430416 5010 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.430435 5010 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d6356a1-c07c-4d04-8d48-7f13a822ddf5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532312 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532451 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532553 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532660 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532757 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532789 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.532882 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rmrl\" 
(UniqueName: \"kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl\") pod \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\" (UID: \"4338eb03-3ad6-4d68-8d8a-a37694aff6d7\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.553941 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts" (OuterVolumeSpecName: "scripts") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.554552 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl" (OuterVolumeSpecName: "kube-api-access-4rmrl") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "kube-api-access-4rmrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.566454 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.567595 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.567851 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.588988 5010 scope.go:117] "RemoveContainer" containerID="0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d" Feb 03 10:26:00 crc kubenswrapper[5010]: E0203 10:26:00.598553 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d\": container with ID starting with 0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d not found: ID does not exist" containerID="0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.598630 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d"} err="failed to get container status \"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d\": rpc error: code = NotFound desc = could not find container \"0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d\": container with ID starting with 0e84cb5a4b62670ae900f150d6236adc4968c099dd1c77f2f3b8f195543ff61d not found: ID does not exist" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.598668 5010 scope.go:117] "RemoveContainer" containerID="68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8" Feb 03 10:26:00 crc kubenswrapper[5010]: E0203 10:26:00.611072 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8\": container with ID starting with 68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8 not found: ID does not exist" containerID="68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.611157 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8"} err="failed to get container status \"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8\": rpc error: code = NotFound desc = could not find container \"68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8\": container with ID starting with 68b79805974048ca3527e4cd57a6d3b61f940b55e09d99456ba6ad67453692d8 not found: ID does not exist" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.611735 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.635928 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.635980 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.635992 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.636003 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.636015 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rmrl\" (UniqueName: \"kubernetes.io/projected/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-kube-api-access-4rmrl\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.636977 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data" (OuterVolumeSpecName: "config-data") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.672716 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4338eb03-3ad6-4d68-8d8a-a37694aff6d7" (UID: "4338eb03-3ad6-4d68-8d8a-a37694aff6d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.737404 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.737601 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvk2j\" (UniqueName: \"kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.737679 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.737797 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.737867 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.738071 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.738146 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data\") pod \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\" (UID: \"872497ad-02bf-48fd-9ef7-c39591cd0cf3\") " Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.738893 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.739049 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs" (OuterVolumeSpecName: "logs") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.739439 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.739460 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4338eb03-3ad6-4d68-8d8a-a37694aff6d7-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.739472 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/872497ad-02bf-48fd-9ef7-c39591cd0cf3-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.739485 5010 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/872497ad-02bf-48fd-9ef7-c39591cd0cf3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.758553 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.771431 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts" (OuterVolumeSpecName: "scripts") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.771537 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j" (OuterVolumeSpecName: "kube-api-access-kvk2j") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "kube-api-access-kvk2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.773492 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7f744c8944-2zwzr"] Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.785722 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7f744c8944-2zwzr"] Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.802344 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.823380 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.834099 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data" (OuterVolumeSpecName: "config-data") pod "872497ad-02bf-48fd-9ef7-c39591cd0cf3" (UID: "872497ad-02bf-48fd-9ef7-c39591cd0cf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.842221 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.842691 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.842892 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.843077 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvk2j\" (UniqueName: \"kubernetes.io/projected/872497ad-02bf-48fd-9ef7-c39591cd0cf3-kube-api-access-kvk2j\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:00 crc kubenswrapper[5010]: I0203 10:26:00.843140 5010 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/872497ad-02bf-48fd-9ef7-c39591cd0cf3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.299574 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.299553 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4338eb03-3ad6-4d68-8d8a-a37694aff6d7","Type":"ContainerDied","Data":"61a59197d7bdf8ea63d4d37b8f71bb48f78f9037194046295bca9711dd2a3194"} Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.299773 5010 scope.go:117] "RemoveContainer" containerID="66c74d715b2eacb41bf0f0e39922576ad416b3eb1d6ad6955ec6036858cd2f1d" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.312073 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"872497ad-02bf-48fd-9ef7-c39591cd0cf3","Type":"ContainerDied","Data":"c4597e5fb6f0efc59bba027f6c62619a6af54fb50a6a0e89101889e721398156"} Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.312107 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.336383 5010 scope.go:117] "RemoveContainer" containerID="d91d141426317acd31c21e9040c1e38df0008cc513ccacd6d4ecf8718788f6f7" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.431307 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.468317 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.480671 5010 scope.go:117] "RemoveContainer" containerID="6c176208520e0e4aa9ea320d1edfe8ab83a7718fb33505386deba54305a99180" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.493610 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494343 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api-log" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494373 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api-log" Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494395 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="sg-core" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494404 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="sg-core" Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494413 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494421 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api" Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494477 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="ceilometer-notification-agent" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494489 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="ceilometer-notification-agent" Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494505 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-log" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494514 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-log" Feb 03 10:26:01 crc kubenswrapper[5010]: E0203 10:26:01.494539 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-api" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494548 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-api" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494813 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="sg-core" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494840 5010 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api-log" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494859 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-log" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494876 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" containerName="cinder-api" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494895 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" containerName="placement-api" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.494907 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" containerName="ceilometer-notification-agent" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.499539 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.513837 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.514121 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.514349 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.547811 5010 scope.go:117] "RemoveContainer" containerID="8f0a78e854f7929105346a11ec8aadfb8c983687a2549ad3dc08c8797e25a961" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.554265 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.603668 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.623494 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.637381 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.656357 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.667665 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.668015 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.687754 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.687889 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.687943 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e079d37-86a2-4be8-a16b-821095c780f0-logs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.688064 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gbbw\" (UniqueName: \"kubernetes.io/projected/7e079d37-86a2-4be8-a16b-821095c780f0-kube-api-access-7gbbw\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.688288 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.688410 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-scripts\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.688456 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.688879 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e079d37-86a2-4be8-a16b-821095c780f0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.690712 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.719306 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.795800 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.795924 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-scripts\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.795967 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796047 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796081 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796128 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796191 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e079d37-86a2-4be8-a16b-821095c780f0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796266 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796312 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796353 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vs4n\" (UniqueName: \"kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796391 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796430 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e079d37-86a2-4be8-a16b-821095c780f0-logs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796483 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796517 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796606 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gbbw\" (UniqueName: \"kubernetes.io/projected/7e079d37-86a2-4be8-a16b-821095c780f0-kube-api-access-7gbbw\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.796679 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.800695 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e079d37-86a2-4be8-a16b-821095c780f0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.802569 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e079d37-86a2-4be8-a16b-821095c780f0-logs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.805684 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.806374 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-scripts\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.809315 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-config-data\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.812527 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.814607 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-public-tls-certs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.823376 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gbbw\" (UniqueName: \"kubernetes.io/projected/7e079d37-86a2-4be8-a16b-821095c780f0-kube-api-access-7gbbw\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.826779 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e079d37-86a2-4be8-a16b-821095c780f0-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"7e079d37-86a2-4be8-a16b-821095c780f0\") " pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.847436 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900008 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vs4n\" (UniqueName: \"kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900108 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900148 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900248 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900476 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900501 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.900547 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.902770 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.903128 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.906788 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " 
pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.909289 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.912346 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.912960 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:01 crc kubenswrapper[5010]: I0203 10:26:01.928752 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vs4n\" (UniqueName: \"kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n\") pod \"ceilometer-0\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " pod="openstack/ceilometer-0" Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.009813 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.448530 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.521777 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4338eb03-3ad6-4d68-8d8a-a37694aff6d7" path="/var/lib/kubelet/pods/4338eb03-3ad6-4d68-8d8a-a37694aff6d7/volumes" Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.524520 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="872497ad-02bf-48fd-9ef7-c39591cd0cf3" path="/var/lib/kubelet/pods/872497ad-02bf-48fd-9ef7-c39591cd0cf3/volumes" Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.526241 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6356a1-c07c-4d04-8d48-7f13a822ddf5" path="/var/lib/kubelet/pods/8d6356a1-c07c-4d04-8d48-7f13a822ddf5/volumes" Feb 03 10:26:02 crc kubenswrapper[5010]: I0203 10:26:02.667763 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:02 crc kubenswrapper[5010]: W0203 10:26:02.692096 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4909daad_030c_436e_acf5_2405a74d8180.slice/crio-9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010 WatchSource:0}: Error finding container 9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010: Status 404 returned error can't find the container with id 9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010 Feb 03 10:26:03 crc kubenswrapper[5010]: I0203 10:26:03.416398 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e079d37-86a2-4be8-a16b-821095c780f0","Type":"ContainerStarted","Data":"244db2c4c114273555c75c4cb333f4b696198bb58fac76777ecd9f7aee8092e2"} Feb 03 10:26:03 crc kubenswrapper[5010]: I0203 10:26:03.417102 5010 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e079d37-86a2-4be8-a16b-821095c780f0","Type":"ContainerStarted","Data":"d326110758b57899bbb3402e1c571879c314d13619e61b251c6e77d898282b07"} Feb 03 10:26:03 crc kubenswrapper[5010]: I0203 10:26:03.418278 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerStarted","Data":"9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010"} Feb 03 10:26:04 crc kubenswrapper[5010]: I0203 10:26:04.440735 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e079d37-86a2-4be8-a16b-821095c780f0","Type":"ContainerStarted","Data":"11c7a18a7c87397a4d54959b8f03343950c2f98b1dfd593b5d45bef5ac9adf81"} Feb 03 10:26:04 crc kubenswrapper[5010]: I0203 10:26:04.441451 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 03 10:26:04 crc kubenswrapper[5010]: I0203 10:26:04.455760 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerStarted","Data":"4198ce459a693b38bf47283f126a3f929ce83d42492541b2b961db5cda2701f4"} Feb 03 10:26:04 crc kubenswrapper[5010]: I0203 10:26:04.477176 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.477144578 podStartE2EDuration="3.477144578s" podCreationTimestamp="2026-02-03 10:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:04.465984611 +0000 UTC m=+1434.621960740" watchObservedRunningTime="2026-02-03 10:26:04.477144578 +0000 UTC m=+1434.633120707" Feb 03 10:26:05 crc kubenswrapper[5010]: I0203 10:26:05.526184 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerStarted","Data":"1bd8603024a229914190fc469345835e8b37de52fd7f1951f53bc0059a29de92"} Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.095465 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.139614 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.153975 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.230143 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.230527 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="dnsmasq-dns" containerID="cri-o://d1764054e077cd4256f8f822597e57237fec354ad2e79a0451fb06420764c4a9" gracePeriod=10 Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.576955 5010 generic.go:334] "Generic (PLEG): container finished" podID="800c4356-da72-47c4-9a83-5eeceacc7211" containerID="d1764054e077cd4256f8f822597e57237fec354ad2e79a0451fb06420764c4a9" exitCode=0 Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.577595 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55f844cf75-v4m78" event={"ID":"800c4356-da72-47c4-9a83-5eeceacc7211","Type":"ContainerDied","Data":"d1764054e077cd4256f8f822597e57237fec354ad2e79a0451fb06420764c4a9"} Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.614642 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="cinder-scheduler" containerID="cri-o://02b1b0db1e1d1490264d407bf569bd8135ae614f331340a7de745dc600379321" gracePeriod=30 Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.614764 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="probe" containerID="cri-o://9afac37147605919491f382bbfc27637b26db8fa47e1eb9f1d9454af8578414f" gracePeriod=30 Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.614627 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerStarted","Data":"67d6ea389313e14d97c8b6c045808e3c44adad70ca29d47d5585704fabd03630"} Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.929029 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.978402 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.978848 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.979035 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.979371 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.979505 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54blj\" (UniqueName: \"kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:06 crc kubenswrapper[5010]: I0203 10:26:06.979712 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config\") pod \"800c4356-da72-47c4-9a83-5eeceacc7211\" (UID: \"800c4356-da72-47c4-9a83-5eeceacc7211\") " Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 
10:26:07.113648 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj" (OuterVolumeSpecName: "kube-api-access-54blj") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "kube-api-access-54blj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.174503 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.192521 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.209158 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config" (OuterVolumeSpecName: "config") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.212422 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.212479 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54blj\" (UniqueName: \"kubernetes.io/projected/800c4356-da72-47c4-9a83-5eeceacc7211-kube-api-access-54blj\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.212500 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.212512 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.231028 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.276017 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "800c4356-da72-47c4-9a83-5eeceacc7211" (UID: "800c4356-da72-47c4-9a83-5eeceacc7211"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.314701 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.314751 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/800c4356-da72-47c4-9a83-5eeceacc7211-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.543939 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7594db59b7-8cg94"] Feb 03 10:26:07 crc kubenswrapper[5010]: E0203 10:26:07.546081 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="dnsmasq-dns" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.546122 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="dnsmasq-dns" Feb 03 10:26:07 crc kubenswrapper[5010]: E0203 10:26:07.546197 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="init" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.546207 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="init" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.546558 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" containerName="dnsmasq-dns" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.554130 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.561609 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.561610 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.562199 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.586831 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7594db59b7-8cg94"] Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.735403 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" event={"ID":"800c4356-da72-47c4-9a83-5eeceacc7211","Type":"ContainerDied","Data":"a39cc9b17b280be33534b557e14c9c1d9f99cb76acef07ae259bc5d74339aa49"} Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.735514 5010 scope.go:117] "RemoveContainer" containerID="d1764054e077cd4256f8f822597e57237fec354ad2e79a0451fb06420764c4a9" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.736235 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-v4m78" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.791182 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.800188 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-config-data\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.800339 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-etc-swift\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.800367 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-run-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.800412 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhnnp\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-kube-api-access-qhnnp\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.801721 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-internal-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: 
\"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.803238 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-v4m78"] Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.803636 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-combined-ca-bundle\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.803833 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-public-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.804104 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-log-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907005 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-internal-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907093 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-combined-ca-bundle\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907153 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-public-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907353 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-log-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907432 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-config-data\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907462 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-etc-swift\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907486 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-run-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.907518 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhnnp\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-kube-api-access-qhnnp\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.908780 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-log-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.909015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0d01af0-abb7-4cd1-92d7-d741182948f9-run-httpd\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.918629 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-public-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.918876 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-config-data\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.919837 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-combined-ca-bundle\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.920113 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d01af0-abb7-4cd1-92d7-d741182948f9-internal-tls-certs\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.934580 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-etc-swift\") pod 
\"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:07 crc kubenswrapper[5010]: I0203 10:26:07.939059 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhnnp\" (UniqueName: \"kubernetes.io/projected/a0d01af0-abb7-4cd1-92d7-d741182948f9-kube-api-access-qhnnp\") pod \"swift-proxy-7594db59b7-8cg94\" (UID: \"a0d01af0-abb7-4cd1-92d7-d741182948f9\") " pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:08 crc kubenswrapper[5010]: I0203 10:26:08.188684 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:08 crc kubenswrapper[5010]: I0203 10:26:08.520898 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="800c4356-da72-47c4-9a83-5eeceacc7211" path="/var/lib/kubelet/pods/800c4356-da72-47c4-9a83-5eeceacc7211/volumes" Feb 03 10:26:08 crc kubenswrapper[5010]: I0203 10:26:08.772014 5010 generic.go:334] "Generic (PLEG): container finished" podID="2608e076-ccd5-4d9b-9739-d2815655090e" containerID="9afac37147605919491f382bbfc27637b26db8fa47e1eb9f1d9454af8578414f" exitCode=0 Feb 03 10:26:08 crc kubenswrapper[5010]: I0203 10:26:08.772107 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerDied","Data":"9afac37147605919491f382bbfc27637b26db8fa47e1eb9f1d9454af8578414f"} Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.354908 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.420836 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.661610 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"] Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.818347 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.819273 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-log" containerID="cri-o://55bbb2cde20dfdcd53e2ce462c09a9714ec6a75aaad1416462255a0ed6efb0a8" gracePeriod=30 Feb 03 10:26:09 crc kubenswrapper[5010]: I0203 10:26:09.819500 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-httpd" containerID="cri-o://9b0678012ddc709164e9aead0d03359efde01194b4a43605e01e402b58fd05e9" gracePeriod=30 Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.018280 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.824909 5010 generic.go:334] "Generic (PLEG): container finished" podID="2608e076-ccd5-4d9b-9739-d2815655090e" containerID="02b1b0db1e1d1490264d407bf569bd8135ae614f331340a7de745dc600379321" exitCode=0 Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.825031 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerDied","Data":"02b1b0db1e1d1490264d407bf569bd8135ae614f331340a7de745dc600379321"} Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.830134 5010 generic.go:334] "Generic (PLEG): container finished" podID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerID="55bbb2cde20dfdcd53e2ce462c09a9714ec6a75aaad1416462255a0ed6efb0a8" exitCode=143 Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.830490 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zcvn8" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server" containerID="cri-o://8340acedc9cfb7958b5ed0fad5a8c1555a0dabbb9f7998f97b867b7a3dd1d05e" gracePeriod=2 Feb 03 10:26:10 crc kubenswrapper[5010]: I0203 10:26:10.830912 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerDied","Data":"55bbb2cde20dfdcd53e2ce462c09a9714ec6a75aaad1416462255a0ed6efb0a8"} Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.682452 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.683417 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-log" containerID="cri-o://d96c848085855a1aab0bb15f4dcb25d155e8b02a76c2309a7e985e9edc63c08c" gracePeriod=30 Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.683652 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-httpd" containerID="cri-o://25ca14ceea3124e9ce28f484389b454fe015ddd37e62df01b7fb16db5f838f83" gracePeriod=30 Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.853069 5010 generic.go:334] "Generic (PLEG): container finished" podID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerID="d96c848085855a1aab0bb15f4dcb25d155e8b02a76c2309a7e985e9edc63c08c" exitCode=143 Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.853150 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerDied","Data":"d96c848085855a1aab0bb15f4dcb25d155e8b02a76c2309a7e985e9edc63c08c"} Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.859418 5010 generic.go:334] "Generic (PLEG): container finished" podID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerID="8340acedc9cfb7958b5ed0fad5a8c1555a0dabbb9f7998f97b867b7a3dd1d05e" exitCode=0 Feb 03 10:26:11 crc kubenswrapper[5010]: I0203 10:26:11.859650 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerDied","Data":"8340acedc9cfb7958b5ed0fad5a8c1555a0dabbb9f7998f97b867b7a3dd1d05e"} Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.437146 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-78c78c7889-r9575" Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.566304 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-867995856-hbnv9"] Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.566729 5010 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-867995856-hbnv9" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-api" containerID="cri-o://13a99ef6826ee2239f9e033be19a6f4c730512b38fb4cc1caa87b9ad6b5789db" gracePeriod=30 Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.567679 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-867995856-hbnv9" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-httpd" containerID="cri-o://61b9f09360bad3b65b22af3bd28bc767427a951a1f75a5674af55a31458394a9" gracePeriod=30 Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.908306 5010 generic.go:334] "Generic (PLEG): container finished" podID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerID="61b9f09360bad3b65b22af3bd28bc767427a951a1f75a5674af55a31458394a9" exitCode=0 Feb 03 10:26:12 crc kubenswrapper[5010]: I0203 10:26:12.908384 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerDied","Data":"61b9f09360bad3b65b22af3bd28bc767427a951a1f75a5674af55a31458394a9"} Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.946609 5010 generic.go:334] "Generic (PLEG): container finished" podID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerID="45c56002ab101b0e77fc5934aa412e9d50c3e636af770ec4fe10888a673e7f7e" exitCode=137 Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.947372 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988db4-2mpfb" event={"ID":"2fedcc57-b16c-4177-a10e-f627269b4adb","Type":"ContainerDied","Data":"45c56002ab101b0e77fc5934aa412e9d50c3e636af770ec4fe10888a673e7f7e"} Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.952191 5010 generic.go:334] "Generic (PLEG): container finished" podID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerID="9b0678012ddc709164e9aead0d03359efde01194b4a43605e01e402b58fd05e9" exitCode=0 Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.952343 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerDied","Data":"9b0678012ddc709164e9aead0d03359efde01194b4a43605e01e402b58fd05e9"} Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.964450 5010 generic.go:334] "Generic (PLEG): container finished" podID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerID="2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0" exitCode=137 Feb 03 10:26:13 crc kubenswrapper[5010]: I0203 10:26:13.964554 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerDied","Data":"2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0"} Feb 03 10:26:15 crc kubenswrapper[5010]: I0203 10:26:15.386091 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 03 10:26:15 crc kubenswrapper[5010]: I0203 10:26:15.990529 5010 generic.go:334] "Generic (PLEG): container finished" podID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerID="25ca14ceea3124e9ce28f484389b454fe015ddd37e62df01b7fb16db5f838f83" exitCode=0 Feb 03 10:26:15 crc kubenswrapper[5010]: I0203 10:26:15.990619 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerDied","Data":"25ca14ceea3124e9ce28f484389b454fe015ddd37e62df01b7fb16db5f838f83"} Feb 03 10:26:17 crc kubenswrapper[5010]: E0203 10:26:17.223501 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 03 10:26:17 crc kubenswrapper[5010]: E0203 10:26:17.224480 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n548h65dh564h668h596h87hffh65dh559h5chbch654h5fdh64dhffh94h75hbbh79h67bh5c5h8chf4h7ch5c9h5c9h5ch588h88hb9hch648q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nq64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(c80632c0-72bc-461d-8e87-591d0ddbc1a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:26:17 crc kubenswrapper[5010]: E0203 10:26:17.225789 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="c80632c0-72bc-461d-8e87-591d0ddbc1a8" Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.433896 5010 scope.go:117] "RemoveContainer" containerID="e300605267e4f1076a4841165415138776a8cf13a2c4a8aef99e228176fdb314" Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 
10:26:17.698011 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.897155 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xt6g\" (UniqueName: \"kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g\") pod \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.897311 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities\") pod \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.897397 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content\") pod \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\" (UID: \"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb\") " Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.900896 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities" (OuterVolumeSpecName: "utilities") pod "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" (UID: "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:17 crc kubenswrapper[5010]: I0203 10:26:17.911740 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g" (OuterVolumeSpecName: "kube-api-access-7xt6g") pod "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" (UID: "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb"). InnerVolumeSpecName "kube-api-access-7xt6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.004582 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xt6g\" (UniqueName: \"kubernetes.io/projected/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-kube-api-access-7xt6g\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.004623 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.029921 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" (UID: "a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.070583 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": dial tcp 10.217.0.151:9292: connect: connection refused" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.072562 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": dial tcp 10.217.0.151:9292: connect: connection refused" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.074642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcvn8" event={"ID":"a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb","Type":"ContainerDied","Data":"e35e681b91c0a3ba4c5e23b8c2426b406cc51121c6807c30d998f313924cb39e"} Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.074727 5010 scope.go:117] "RemoveContainer" containerID="8340acedc9cfb7958b5ed0fad5a8c1555a0dabbb9f7998f97b867b7a3dd1d05e" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.074933 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zcvn8" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.108353 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: E0203 10:26:18.133068 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="c80632c0-72bc-461d-8e87-591d0ddbc1a8" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.208744 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"] Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.228978 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zcvn8"] Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.245476 5010 scope.go:117] "RemoveContainer" containerID="74673c9131b0207ab10afaa2abb5a53e1aa2d49409325c6d66e87e77d3e886a6" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.512412 5010 scope.go:117] "RemoveContainer" containerID="fe0ab3a7555528e34ba8c05e18f87523a24b1e0ac976b994fc2479b4a244d8aa" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.519893 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" path="/var/lib/kubelet/pods/a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb/volumes" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.529981 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.583842 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645380 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645502 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v84sf\" (UniqueName: \"kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645620 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645711 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645822 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645875 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645896 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.645957 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646019 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrcvl\" (UniqueName: \"kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646260 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: 
\"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646323 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646377 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id\") pod \"2608e076-ccd5-4d9b-9739-d2815655090e\" (UID: \"2608e076-ccd5-4d9b-9739-d2815655090e\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646399 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.646463 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs\") pod \"3ef87127-760d-4f81-8a78-a06d074c7ec3\" (UID: \"3ef87127-760d-4f81-8a78-a06d074c7ec3\") " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.652176 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.652794 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs" (OuterVolumeSpecName: "logs") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.653794 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.690790 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts" (OuterVolumeSpecName: "scripts") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.693044 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.697853 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.700376 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts" (OuterVolumeSpecName: "scripts") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.727749 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl" (OuterVolumeSpecName: "kube-api-access-jrcvl") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "kube-api-access-jrcvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.735566 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf" (OuterVolumeSpecName: "kube-api-access-v84sf") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "kube-api-access-v84sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751499 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751563 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v84sf\" (UniqueName: \"kubernetes.io/projected/3ef87127-760d-4f81-8a78-a06d074c7ec3-kube-api-access-v84sf\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751577 5010 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751587 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751595 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef87127-760d-4f81-8a78-a06d074c7ec3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751610 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrcvl\" (UniqueName: \"kubernetes.io/projected/2608e076-ccd5-4d9b-9739-d2815655090e-kube-api-access-jrcvl\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751620 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751630 5010 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2608e076-ccd5-4d9b-9739-d2815655090e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.751666 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.829206 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.860175 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.976643 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.980913 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.982901 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:18 crc kubenswrapper[5010]: I0203 10:26:18.982931 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.027118 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.047615 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.084537 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.084713 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.084832 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.084937 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ddcb\" (UniqueName: \"kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.084991 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.085012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 
10:26:19.085105 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-logs\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.085250 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.085765 5010 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.102971 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.114334 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-logs" (OuterVolumeSpecName: "logs") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.130122 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data" (OuterVolumeSpecName: "config-data") pod "3ef87127-760d-4f81-8a78-a06d074c7ec3" (UID: "3ef87127-760d-4f81-8a78-a06d074c7ec3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.131247 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data" (OuterVolumeSpecName: "config-data") pod "2608e076-ccd5-4d9b-9739-d2815655090e" (UID: "2608e076-ccd5-4d9b-9739-d2815655090e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.136672 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb" (OuterVolumeSpecName: "kube-api-access-8ddcb") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "kube-api-access-8ddcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.186713 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196608 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196651 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2608e076-ccd5-4d9b-9739-d2815655090e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196666 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ddcb\" (UniqueName: \"kubernetes.io/projected/8d327288-f34e-4766-b3f6-b52b5c985d7d-kube-api-access-8ddcb\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196708 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196721 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d327288-f34e-4766-b3f6-b52b5c985d7d-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.196734 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef87127-760d-4f81-8a78-a06d074c7ec3-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.209448 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts" (OuterVolumeSpecName: "scripts") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.221597 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.251785 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef87127-760d-4f81-8a78-a06d074c7ec3","Type":"ContainerDied","Data":"6bd4ac18ae915fc96ca9ce387172eccabbebfdb18cd09371727e5b54df8c7288"} Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.251885 5010 scope.go:117] "RemoveContainer" containerID="9b0678012ddc709164e9aead0d03359efde01194b4a43605e01e402b58fd05e9" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.252051 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.289657 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8d327288-f34e-4766-b3f6-b52b5c985d7d","Type":"ContainerDied","Data":"1764b6a93e3f3ed5e01b4b46981d2b3555284f7ada6ea1b560610775c21c68d5"} Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.289798 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.313057 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.313100 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.345446 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7594db59b7-8cg94"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.351051 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cc988db4-2mpfb" event={"ID":"2fedcc57-b16c-4177-a10e-f627269b4adb","Type":"ContainerStarted","Data":"6fbb0922a53d8d49edbd5cf6902f7fd678c5bafcb14a6637ba51e4911560e746"} Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.361354 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2608e076-ccd5-4d9b-9739-d2815655090e","Type":"ContainerDied","Data":"8fc43be7c4e38eab87c6ce057e45c890d78c06e59c1c3f94eb288aeb3ef2742e"} Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.361533 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.424201 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.428257 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerStarted","Data":"204ff7b5906df6362a9178ddb04b60b73173622cbd63d2c7b2264912f116e282"} Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.429029 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-central-agent" containerID="cri-o://4198ce459a693b38bf47283f126a3f929ce83d42492541b2b961db5cda2701f4" gracePeriod=30 Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.429543 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.429761 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="proxy-httpd" containerID="cri-o://204ff7b5906df6362a9178ddb04b60b73173622cbd63d2c7b2264912f116e282" gracePeriod=30 Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.429919 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="sg-core" containerID="cri-o://67d6ea389313e14d97c8b6c045808e3c44adad70ca29d47d5585704fabd03630" gracePeriod=30 Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.430062 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-notification-agent" containerID="cri-o://1bd8603024a229914190fc469345835e8b37de52fd7f1951f53bc0059a29de92" gracePeriod=30 Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.475475 5010 generic.go:334] "Generic (PLEG): container finished" podID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerID="13a99ef6826ee2239f9e033be19a6f4c730512b38fb4cc1caa87b9ad6b5789db" exitCode=0 Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.475566 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerDied","Data":"13a99ef6826ee2239f9e033be19a6f4c730512b38fb4cc1caa87b9ad6b5789db"} Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.481392 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerStarted","Data":"4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607"} Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.516992 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.289353566 podStartE2EDuration="18.516964547s" podCreationTimestamp="2026-02-03 10:26:01 +0000 UTC" firstStartedPulling="2026-02-03 10:26:02.694767891 +0000 UTC m=+1432.850744020" lastFinishedPulling="2026-02-03 10:26:17.922378872 +0000 UTC m=+1448.078355001" observedRunningTime="2026-02-03 10:26:19.512816581 +0000 UTC m=+1449.668792730" 
watchObservedRunningTime="2026-02-03 10:26:19.516964547 +0000 UTC m=+1449.672940676" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.535260 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data" (OuterVolumeSpecName: "config-data") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.540502 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") pod \"8d327288-f34e-4766-b3f6-b52b5c985d7d\" (UID: \"8d327288-f34e-4766-b3f6-b52b5c985d7d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.541547 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: W0203 10:26:19.545188 5010 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8d327288-f34e-4766-b3f6-b52b5c985d7d/volumes/kubernetes.io~secret/config-data Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.545241 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data" (OuterVolumeSpecName: "config-data") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.554629 5010 scope.go:117] "RemoveContainer" containerID="55bbb2cde20dfdcd53e2ce462c09a9714ec6a75aaad1416462255a0ed6efb0a8" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.585833 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8d327288-f34e-4766-b3f6-b52b5c985d7d" (UID: "8d327288-f34e-4766-b3f6-b52b5c985d7d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.602409 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-867995856-hbnv9" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.664967 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.665046 5010 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d327288-f34e-4766-b3f6-b52b5c985d7d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.708809 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.714664 5010 scope.go:117] "RemoveContainer" containerID="25ca14ceea3124e9ce28f484389b454fe015ddd37e62df01b7fb16db5f838f83" Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.766457 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config\") pod \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.766527 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle\") pod \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.766594 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkvkc\" (UniqueName: \"kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc\") pod \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.766884 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config\") pod \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.766930 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs\") pod \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\" (UID: \"ec3f26b1-ee88-47b4-80d5-f281aa85c00d\") " Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.770372 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.782696 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.807797 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.825464 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.825927 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc" 
(OuterVolumeSpecName: "kube-api-access-mkvkc") pod "ec3f26b1-ee88-47b4-80d5-f281aa85c00d" (UID: "ec3f26b1-ee88-47b4-80d5-f281aa85c00d"). InnerVolumeSpecName "kube-api-access-mkvkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.828325 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="extract-content"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829549 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="extract-content"
Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829586 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-api"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829596 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-api"
Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829619 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-httpd"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829631 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-httpd"
Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829641 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="cinder-scheduler"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829648 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="cinder-scheduler"
Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829663 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-log"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829669 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-log"
Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829681 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-log"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829687 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-log"
Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829697 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="probe"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829704 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="probe"
Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829737 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829745 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server"
Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829755 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="extract-utilities"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829763 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="extract-utilities"
Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829777 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-httpd"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829785 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-httpd"
Feb 03 10:26:19 crc kubenswrapper[5010]: E0203 10:26:19.829803 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-httpd"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.829810 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-httpd"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830157 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-log"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830183 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="cinder-scheduler"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830197 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-httpd"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830237 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-log"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830257 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" containerName="probe"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830269 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-api"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830301 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" containerName="glance-httpd"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830317 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" containerName="neutron-httpd"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.830332 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f8fe2d-cf10-4cd4-bcb0-78a8b6467efb" containerName="registry-server"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.831401 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ec3f26b1-ee88-47b4-80d5-f281aa85c00d" (UID: "ec3f26b1-ee88-47b4-80d5-f281aa85c00d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.831787 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.849065 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.854519 5010 scope.go:117] "RemoveContainer" containerID="d96c848085855a1aab0bb15f4dcb25d155e8b02a76c2309a7e985e9edc63c08c"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.870421 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.881183 5010 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.881258 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkvkc\" (UniqueName: \"kubernetes.io/projected/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-kube-api-access-mkvkc\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.949573 5010 scope.go:117] "RemoveContainer" containerID="9afac37147605919491f382bbfc27637b26db8fa47e1eb9f1d9454af8578414f"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.963163 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.968801 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.972045 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mtbjz"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.972397 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.974954 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.974981 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.979092 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config" (OuterVolumeSpecName: "config") pod "ec3f26b1-ee88-47b4-80d5-f281aa85c00d" (UID: "ec3f26b1-ee88-47b4-80d5-f281aa85c00d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983102 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dxll\" (UniqueName: \"kubernetes.io/projected/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-kube-api-access-9dxll\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983205 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983303 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983342 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983381 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-scripts\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983400 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.983870 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-config\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:19 crc kubenswrapper[5010]: I0203 10:26:19.984029 5010 scope.go:117] "RemoveContainer" containerID="02b1b0db1e1d1490264d407bf569bd8135ae614f331340a7de745dc600379321"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.007270 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec3f26b1-ee88-47b4-80d5-f281aa85c00d" (UID: "ec3f26b1-ee88-47b4-80d5-f281aa85c00d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.035569 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.056645 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.078136 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.104165 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107448 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-scripts\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107580 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107736 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-logs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107817 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-scripts\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107861 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107895 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.107933 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.108122 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.108290 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-config-data\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.108744 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.109062 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dxll\" (UniqueName: \"kubernetes.io/projected/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-kube-api-access-9dxll\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.109273 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db6fd\" (UniqueName: \"kubernetes.io/projected/1769cccf-496c-4370-8e08-e1f156fecd77-kube-api-access-db6fd\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.109517 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.109653 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.110891 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.113915 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.125184 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.125406 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.125832 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-scripts\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.126749 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.128192 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.133924 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.135392 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.143073 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.144805 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dxll\" (UniqueName: \"kubernetes.io/projected/63ed8c2d-6ac3-4a61-8e4c-1601efeca708-kube-api-access-9dxll\") pod \"cinder-scheduler-0\" (UID: \"63ed8c2d-6ac3-4a61-8e4c-1601efeca708\") " pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.168180 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ec3f26b1-ee88-47b4-80d5-f281aa85c00d" (UID: "ec3f26b1-ee88-47b4-80d5-f281aa85c00d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.213231 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.213627 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.213737 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.213881 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.213984 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.214086 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.214235 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-logs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.214409 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-scripts\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216176 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216380 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8d6z\" (UniqueName: \"kubernetes.io/projected/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-kube-api-access-m8d6z\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216615 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216693 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-logs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216854 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.216918 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.217082 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.217203 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-config-data\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.217431 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db6fd\" (UniqueName: \"kubernetes.io/projected/1769cccf-496c-4370-8e08-e1f156fecd77-kube-api-access-db6fd\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.218399 5010 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3f26b1-ee88-47b4-80d5-f281aa85c00d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.223129 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-config-data\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.223247 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.223917 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769cccf-496c-4370-8e08-e1f156fecd77-logs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.228809 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-scripts\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.229280 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.233045 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769cccf-496c-4370-8e08-e1f156fecd77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.241664 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.242632 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db6fd\" (UniqueName: \"kubernetes.io/projected/1769cccf-496c-4370-8e08-e1f156fecd77-kube-api-access-db6fd\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.282350 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1769cccf-496c-4370-8e08-e1f156fecd77\") " pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.320383 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.320716 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321000 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321371 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321534 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321638 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-logs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321787 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8d6z\" (UniqueName: \"kubernetes.io/projected/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-kube-api-access-m8d6z\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.321878 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.323081 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-logs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.324986 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.327211 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.329471 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.348014 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.354879 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.360107 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8d6z\" (UniqueName: \"kubernetes.io/projected/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-kube-api-access-m8d6z\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.360945 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.417896 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a\") " pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.467081 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.595063 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2608e076-ccd5-4d9b-9739-d2815655090e" path="/var/lib/kubelet/pods/2608e076-ccd5-4d9b-9739-d2815655090e/volumes"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.596173 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" path="/var/lib/kubelet/pods/3ef87127-760d-4f81-8a78-a06d074c7ec3/volumes"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.602305 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d327288-f34e-4766-b3f6-b52b5c985d7d" path="/var/lib/kubelet/pods/8d327288-f34e-4766-b3f6-b52b5c985d7d/volumes"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.619252 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7594db59b7-8cg94" event={"ID":"a0d01af0-abb7-4cd1-92d7-d741182948f9","Type":"ContainerStarted","Data":"d650c86cc6764932add9e9703768e8c9d50ba847abea4a0b062a2d92d6a9e49d"}
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.619370 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7594db59b7-8cg94" event={"ID":"a0d01af0-abb7-4cd1-92d7-d741182948f9","Type":"ContainerStarted","Data":"3bb214043f133be975e271904ac4313246c72c1065478f5fc497fe7508412cbf"}
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.690640 5010 generic.go:334] "Generic (PLEG): container finished" podID="4909daad-030c-436e-acf5-2405a74d8180" containerID="67d6ea389313e14d97c8b6c045808e3c44adad70ca29d47d5585704fabd03630" exitCode=2
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.691190 5010 generic.go:334] "Generic (PLEG): container finished" podID="4909daad-030c-436e-acf5-2405a74d8180" containerID="4198ce459a693b38bf47283f126a3f929ce83d42492541b2b961db5cda2701f4" exitCode=0
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.691323 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerDied","Data":"67d6ea389313e14d97c8b6c045808e3c44adad70ca29d47d5585704fabd03630"}
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.691367 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerDied","Data":"4198ce459a693b38bf47283f126a3f929ce83d42492541b2b961db5cda2701f4"}
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.703749 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-867995856-hbnv9" event={"ID":"ec3f26b1-ee88-47b4-80d5-f281aa85c00d","Type":"ContainerDied","Data":"5d57a17f6b627eededa0a21aa0ef2051ab13fadb63e9a5ef111d5cb1f8d96193"}
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.703851 5010 scope.go:117] "RemoveContainer" containerID="61b9f09360bad3b65b22af3bd28bc767427a951a1f75a5674af55a31458394a9"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.704118 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-867995856-hbnv9"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.774659 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-867995856-hbnv9"]
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.776817 5010 scope.go:117] "RemoveContainer" containerID="13a99ef6826ee2239f9e033be19a6f4c730512b38fb4cc1caa87b9ad6b5789db"
Feb 03 10:26:20 crc kubenswrapper[5010]: I0203 10:26:20.810753 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-867995856-hbnv9"]
Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.012592 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.472438 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 10:26:21 crc kubenswrapper[5010]: W0203 10:26:21.485605 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9df7182f_e3e9_40bf_bfb2_b2e9ef64f90a.slice/crio-ee33cff183c92ef1b70e5e208d817c51e9ad6b2607a4d19849d3d342e041a4cc WatchSource:0}: Error finding container ee33cff183c92ef1b70e5e208d817c51e9ad6b2607a4d19849d3d342e041a4cc: Status 404 returned error can't find the container with id ee33cff183c92ef1b70e5e208d817c51e9ad6b2607a4d19849d3d342e041a4cc
Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.720257 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a","Type":"ContainerStarted","Data":"ee33cff183c92ef1b70e5e208d817c51e9ad6b2607a4d19849d3d342e041a4cc"}
Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.724280 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"63ed8c2d-6ac3-4a61-8e4c-1601efeca708","Type":"ContainerStarted","Data":"5094e83e6ce9d9199193fbd6c30a37df43729ffbc7fdc7fa8d97620d280876e4"}
Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.738978 5010 generic.go:334] "Generic (PLEG): container finished" podID="4909daad-030c-436e-acf5-2405a74d8180" containerID="1bd8603024a229914190fc469345835e8b37de52fd7f1951f53bc0059a29de92" exitCode=0
Feb 03 10:26:21 crc kubenswrapper[5010]: I0203 10:26:21.739048 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerDied","Data":"1bd8603024a229914190fc469345835e8b37de52fd7f1951f53bc0059a29de92"}
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.219705 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qnsrk"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.236041 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qnsrk"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.284740 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qnsrk"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.416867 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d58b-account-create-update-p69h5"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.419565 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d58b-account-create-update-p69h5"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.432082 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts\") pod \"nova-api-db-create-qnsrk\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.432634 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4nh7\" (UniqueName: \"kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7\") pod \"nova-api-db-create-qnsrk\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.432826 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.600852 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8h8k\" (UniqueName: \"kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.600960 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4nh7\" (UniqueName: \"kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7\") pod \"nova-api-db-create-qnsrk\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.601174 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts\") pod \"nova-api-db-create-qnsrk\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.605147 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.610054 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts\") pod \"nova-api-db-create-qnsrk\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.675278 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3f26b1-ee88-47b4-80d5-f281aa85c00d" path="/var/lib/kubelet/pods/ec3f26b1-ee88-47b4-80d5-f281aa85c00d/volumes"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.676870 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d58b-account-create-update-p69h5"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.676907 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.676924 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dq6kw"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.681736 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dq6kw"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.709519 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dq6kw"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.713315 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4nh7\" (UniqueName: \"kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7\") pod \"nova-api-db-create-qnsrk\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " pod="openstack/nova-api-db-create-qnsrk"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.746199 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.746645 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8h8k\" (UniqueName: \"kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.750120 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.777293 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8h8k\" (UniqueName: \"kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k\") pod \"nova-api-d58b-account-create-update-p69h5\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " pod="openstack/nova-api-d58b-account-create-update-p69h5"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.786680 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fztcs"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.792147 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fztcs"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.797114 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fztcs"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.810416 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cdcd56868-k9h7g"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.811833 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cdcd56868-k9h7g"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.825015 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-46aa-account-create-update-5gs9h"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.837891 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.845738 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.855012 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.860180 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7s8r\" (UniqueName: \"kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.872709 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"63ed8c2d-6ac3-4a61-8e4c-1601efeca708","Type":"ContainerStarted","Data":"608458075b6b49913240654df17472092c0c9c4149bbc8fea5e0d935492ce955"}
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.878455 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-46aa-account-create-update-5gs9h"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.886645 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1769cccf-496c-4370-8e08-e1f156fecd77","Type":"ContainerStarted","Data":"5fcc8ee5ae1d4704603c864a8158576315906c583ebe1fb70d9c31068cab8a7d"}
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.887419 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qnsrk"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.921062 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7594db59b7-8cg94" event={"ID":"a0d01af0-abb7-4cd1-92d7-d741182948f9","Type":"ContainerStarted","Data":"df73bc00fb7fe066fcb6d82f9ec7d7342ce26208e27603844392cda655acb073"}
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.922124 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7594db59b7-8cg94"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.922191 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7594db59b7-8cg94"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.926910 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c6bf-account-create-update-9xrwr"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.930197 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.933801 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.938977 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c6bf-account-create-update-9xrwr"]
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.964449 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.964604 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.964628 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.964683 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dcr7\" (UniqueName: \"kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.964797 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7s8r\" (UniqueName: \"kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.964850 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khj9c\" (UniqueName: \"kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.968614 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw"
Feb 03 10:26:22 crc kubenswrapper[5010]: I0203 10:26:22.979873 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7594db59b7-8cg94" podStartSLOduration=15.979838824 podStartE2EDuration="15.979838824s" podCreationTimestamp="2026-02-03 10:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:22.952144573 +0000 UTC m=+1453.108120722" watchObservedRunningTime="2026-02-03 10:26:22.979838824 +0000 UTC m=+1453.135814953"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.016825 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7s8r\" (UniqueName: \"kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r\") pod \"nova-cell0-db-create-dq6kw\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " pod="openstack/nova-cell0-db-create-dq6kw"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.079229 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d58b-account-create-update-p69h5"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080162 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080536 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khj9c\" (UniqueName: \"kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080618 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chd7x\" (UniqueName: \"kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080692 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.080937 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dcr7\" (UniqueName: \"kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.083429 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.084430 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.111193 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dcr7\" (UniqueName: \"kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7\") pod \"nova-cell1-db-create-fztcs\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " pod="openstack/nova-cell1-db-create-fztcs"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.114158 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khj9c\" (UniqueName: \"kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c\") pod \"nova-cell0-46aa-account-create-update-5gs9h\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " pod="openstack/nova-cell0-46aa-account-create-update-5gs9h"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.124546 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cc988db4-2mpfb"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.125403 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cc988db4-2mpfb"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.196300 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.196906 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chd7x\" (UniqueName: \"kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.203229 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.230516 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dq6kw"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.243453 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chd7x\" (UniqueName: \"kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x\") pod \"nova-cell1-c6bf-account-create-update-9xrwr\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.251139 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fztcs"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.391075 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.422867 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr"
Feb 03 10:26:23 crc kubenswrapper[5010]: I0203 10:26:23.800998 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qnsrk"]
Feb 03 10:26:23 crc kubenswrapper[5010]: W0203 10:26:23.847784 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26fff59b_fc6c_46b2_9cb6_9ad352b4e39c.slice/crio-0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1 WatchSource:0}: Error finding container 0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1: Status 404 returned error can't find the container with id 0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1
Feb 03 10:26:24 crc kubenswrapper[5010]: I0203 10:26:24.000675 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qnsrk" event={"ID":"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c","Type":"ContainerStarted","Data":"0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1"}
Feb 03 10:26:24 crc kubenswrapper[5010]: I0203 10:26:24.022655 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a","Type":"ContainerStarted","Data":"bb831f8968e76e6ea1b5107a74598ccfd811b313307026199e8086e291b6b925"}
Feb 03 10:26:24 crc kubenswrapper[5010]: I0203 10:26:24.090527 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7594db59b7-8cg94" podUID="a0d01af0-abb7-4cd1-92d7-d741182948f9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 03 10:26:24 crc kubenswrapper[5010]: I0203 10:26:24.487575 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d58b-account-create-update-p69h5"]
Feb 03 10:26:24 crc kubenswrapper[5010]: I0203 10:26:24.696809 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dq6kw"]
Feb 03 10:26:24 crc kubenswrapper[5010]: W0203 10:26:24.745282 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod307672c5_ae66_4af2_bbbb_1a59c58ee4b2.slice/crio-eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09 WatchSource:0}: Error finding container eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09: Status 404 returned error can't find the container with id eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09
Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.086874 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d58b-account-create-update-p69h5" event={"ID":"122231ac-5000-44d7-a524-2df85da0abd4","Type":"ContainerStarted","Data":"d7c37356251ceab45787255a0bf11e8d0fb8d799a100064f75c795e909a8b233"}
Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.102419 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"63ed8c2d-6ac3-4a61-8e4c-1601efeca708","Type":"ContainerStarted","Data":"ebf13c00c4aecf2f4b7ce83a689427d37c49b125515ab68b9a2ecbbc3500216e"}
Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.140691 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qnsrk" event={"ID":"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c","Type":"ContainerStarted","Data":"a966998f1e0d5c656c412830d78b6e892d7c7c270d9300eb5f417be99b11fe63"}
Feb 03
10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.159603 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dq6kw" event={"ID":"307672c5-ae66-4af2-bbbb-1a59c58ee4b2","Type":"ContainerStarted","Data":"eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09"} Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.171691 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.171640793 podStartE2EDuration="6.171640793s" podCreationTimestamp="2026-02-03 10:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:25.14272235 +0000 UTC m=+1455.298698479" watchObservedRunningTime="2026-02-03 10:26:25.171640793 +0000 UTC m=+1455.327616932" Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.217884 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-qnsrk" podStartSLOduration=3.217852809 podStartE2EDuration="3.217852809s" podCreationTimestamp="2026-02-03 10:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:25.177080232 +0000 UTC m=+1455.333056361" watchObservedRunningTime="2026-02-03 10:26:25.217852809 +0000 UTC m=+1455.373828938" Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.243441 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.292175 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fztcs"] Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.305652 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-46aa-account-create-update-5gs9h"] Feb 03 10:26:25 crc kubenswrapper[5010]: I0203 10:26:25.381976 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c6bf-account-create-update-9xrwr"] Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.208260 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" event={"ID":"6fac5d19-4577-4190-b626-83d0b42fd46d","Type":"ContainerStarted","Data":"48902a83c43af8a62b4d6b968a8b3ca68e0101eb2b41fc6cd1fdf99dd7be0466"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.209241 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" event={"ID":"6fac5d19-4577-4190-b626-83d0b42fd46d","Type":"ContainerStarted","Data":"e6e83b9fa88b18c5bf71a71896def34f7759be48f196cbee117b1b6d7fc1256f"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.241060 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d58b-account-create-update-p69h5" event={"ID":"122231ac-5000-44d7-a524-2df85da0abd4","Type":"ContainerStarted","Data":"481559434a2d42e2a028cba399231b55666506a6320e8ddbe78f4de71650ba33"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.244884 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" podStartSLOduration=4.24484745 podStartE2EDuration="4.24484745s" podCreationTimestamp="2026-02-03 10:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-03 10:26:26.241017162 +0000 UTC m=+1456.396993291" watchObservedRunningTime="2026-02-03 10:26:26.24484745 +0000 UTC m=+1456.400823579" Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.279381 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fztcs" event={"ID":"19aa5f54-6733-454e-a1cf-92ba62fc4068","Type":"ContainerStarted","Data":"277036577a9bb8f26bb26efd4d33210a114ebacd0ae43e4abbbdfbe425f61dd5"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.279454 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fztcs" event={"ID":"19aa5f54-6733-454e-a1cf-92ba62fc4068","Type":"ContainerStarted","Data":"49558af84c27fd529f7f93b79b04100ed86805e41a3a8207cb74e5891388348f"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.296058 5010 generic.go:334] "Generic (PLEG): container finished" podID="26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" containerID="a966998f1e0d5c656c412830d78b6e892d7c7c270d9300eb5f417be99b11fe63" exitCode=0 Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.296160 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qnsrk" event={"ID":"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c","Type":"ContainerDied","Data":"a966998f1e0d5c656c412830d78b6e892d7c7c270d9300eb5f417be99b11fe63"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.314743 5010 generic.go:334] "Generic (PLEG): container finished" podID="307672c5-ae66-4af2-bbbb-1a59c58ee4b2" containerID="4927cc4be235478029139ce32f036f214b152852871af562859aac3f62d37796" exitCode=0 Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.314887 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dq6kw" event={"ID":"307672c5-ae66-4af2-bbbb-1a59c58ee4b2","Type":"ContainerDied","Data":"4927cc4be235478029139ce32f036f214b152852871af562859aac3f62d37796"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.322817 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" event={"ID":"cab88b93-9009-49d9-8967-dc8f2b9a7244","Type":"ContainerStarted","Data":"279c8b5f461c06f3191fbc6bb211d5d862c782efbbff978992257a86dd9152d3"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.322892 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" event={"ID":"cab88b93-9009-49d9-8967-dc8f2b9a7244","Type":"ContainerStarted","Data":"0a2358b435e4d2a2f42ee4e3e8fcbdc8cf21cbb007e9e788e0e3ad868a511b80"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.327311 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1769cccf-496c-4370-8e08-e1f156fecd77","Type":"ContainerStarted","Data":"2d8db287b9e462878af4470363facdc91935bbf327f4082fd0e4728ee3cb2035"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.340758 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-fztcs" podStartSLOduration=4.340706651 podStartE2EDuration="4.340706651s" podCreationTimestamp="2026-02-03 10:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:26.310476375 +0000 UTC m=+1456.466452494" watchObservedRunningTime="2026-02-03 10:26:26.340706651 +0000 UTC m=+1456.496682780" Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.342246 5010 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a","Type":"ContainerStarted","Data":"b8cb775a7d77ea587bacf09d59466092c4bfc800e46073c399dc94d5fa42b79e"} Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.413727 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" podStartSLOduration=4.413693455 podStartE2EDuration="4.413693455s" podCreationTimestamp="2026-02-03 10:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:26.365919599 +0000 UTC m=+1456.521895748" watchObservedRunningTime="2026-02-03 10:26:26.413693455 +0000 UTC m=+1456.569669584" Feb 03 10:26:26 crc kubenswrapper[5010]: I0203 10:26:26.537126 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.537084694 podStartE2EDuration="7.537084694s" podCreationTimestamp="2026-02-03 10:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:26.448480899 +0000 UTC m=+1456.604457038" watchObservedRunningTime="2026-02-03 10:26:26.537084694 +0000 UTC m=+1456.693060833" Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.354371 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1769cccf-496c-4370-8e08-e1f156fecd77","Type":"ContainerStarted","Data":"246c1e7b3ea1f8cbcc196edcc1361a663ce9cf422d064143d09c6bc10719e9b2"} Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.358847 5010 generic.go:334] "Generic (PLEG): container finished" podID="6fac5d19-4577-4190-b626-83d0b42fd46d" containerID="48902a83c43af8a62b4d6b968a8b3ca68e0101eb2b41fc6cd1fdf99dd7be0466" exitCode=0 Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.358970 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" event={"ID":"6fac5d19-4577-4190-b626-83d0b42fd46d","Type":"ContainerDied","Data":"48902a83c43af8a62b4d6b968a8b3ca68e0101eb2b41fc6cd1fdf99dd7be0466"} Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.364900 5010 generic.go:334] "Generic (PLEG): container finished" podID="122231ac-5000-44d7-a524-2df85da0abd4" containerID="481559434a2d42e2a028cba399231b55666506a6320e8ddbe78f4de71650ba33" exitCode=0 Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.365012 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d58b-account-create-update-p69h5" event={"ID":"122231ac-5000-44d7-a524-2df85da0abd4","Type":"ContainerDied","Data":"481559434a2d42e2a028cba399231b55666506a6320e8ddbe78f4de71650ba33"} Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.367243 5010 generic.go:334] "Generic (PLEG): container finished" podID="19aa5f54-6733-454e-a1cf-92ba62fc4068" containerID="277036577a9bb8f26bb26efd4d33210a114ebacd0ae43e4abbbdfbe425f61dd5" exitCode=0 Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.367335 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fztcs" event={"ID":"19aa5f54-6733-454e-a1cf-92ba62fc4068","Type":"ContainerDied","Data":"277036577a9bb8f26bb26efd4d33210a114ebacd0ae43e4abbbdfbe425f61dd5"} Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.369878 5010 generic.go:334] "Generic (PLEG): container finished" 
podID="cab88b93-9009-49d9-8967-dc8f2b9a7244" containerID="279c8b5f461c06f3191fbc6bb211d5d862c782efbbff978992257a86dd9152d3" exitCode=0 Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.369988 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" event={"ID":"cab88b93-9009-49d9-8967-dc8f2b9a7244","Type":"ContainerDied","Data":"279c8b5f461c06f3191fbc6bb211d5d862c782efbbff978992257a86dd9152d3"} Feb 03 10:26:27 crc kubenswrapper[5010]: I0203 10:26:27.392472 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.392436666 podStartE2EDuration="8.392436666s" podCreationTimestamp="2026-02-03 10:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:26:27.389752347 +0000 UTC m=+1457.545728496" watchObservedRunningTime="2026-02-03 10:26:27.392436666 +0000 UTC m=+1457.548412815" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.044701 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.128439 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts\") pod \"122231ac-5000-44d7-a524-2df85da0abd4\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.128644 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8h8k\" (UniqueName: \"kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k\") pod \"122231ac-5000-44d7-a524-2df85da0abd4\" (UID: \"122231ac-5000-44d7-a524-2df85da0abd4\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.130975 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "122231ac-5000-44d7-a524-2df85da0abd4" (UID: "122231ac-5000-44d7-a524-2df85da0abd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.138799 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k" (OuterVolumeSpecName: "kube-api-access-r8h8k") pod "122231ac-5000-44d7-a524-2df85da0abd4" (UID: "122231ac-5000-44d7-a524-2df85da0abd4"). InnerVolumeSpecName "kube-api-access-r8h8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.202603 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.227977 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7594db59b7-8cg94" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.235190 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/122231ac-5000-44d7-a524-2df85da0abd4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.235250 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8h8k\" (UniqueName: \"kubernetes.io/projected/122231ac-5000-44d7-a524-2df85da0abd4-kube-api-access-r8h8k\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.262162 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.279764 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.337493 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4nh7\" (UniqueName: \"kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7\") pod \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.337653 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts\") pod \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.337695 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7s8r\" (UniqueName: \"kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r\") pod \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\" (UID: \"307672c5-ae66-4af2-bbbb-1a59c58ee4b2\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.337730 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts\") pod \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\" (UID: \"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.342460 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "307672c5-ae66-4af2-bbbb-1a59c58ee4b2" (UID: "307672c5-ae66-4af2-bbbb-1a59c58ee4b2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.349335 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7" (OuterVolumeSpecName: "kube-api-access-j4nh7") pod "26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" (UID: "26fff59b-fc6c-46b2-9cb6-9ad352b4e39c"). InnerVolumeSpecName "kube-api-access-j4nh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.350648 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" (UID: "26fff59b-fc6c-46b2-9cb6-9ad352b4e39c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.352637 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r" (OuterVolumeSpecName: "kube-api-access-z7s8r") pod "307672c5-ae66-4af2-bbbb-1a59c58ee4b2" (UID: "307672c5-ae66-4af2-bbbb-1a59c58ee4b2"). InnerVolumeSpecName "kube-api-access-z7s8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.393272 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d58b-account-create-update-p69h5" event={"ID":"122231ac-5000-44d7-a524-2df85da0abd4","Type":"ContainerDied","Data":"d7c37356251ceab45787255a0bf11e8d0fb8d799a100064f75c795e909a8b233"} Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.393348 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c37356251ceab45787255a0bf11e8d0fb8d799a100064f75c795e909a8b233" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.393442 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d58b-account-create-update-p69h5" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.409285 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qnsrk" event={"ID":"26fff59b-fc6c-46b2-9cb6-9ad352b4e39c","Type":"ContainerDied","Data":"0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1"} Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.409899 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ba4d23b4eba6d6e0c64a591720369c91d163b40cc0f86e50be5facff204aee1" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.409324 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qnsrk" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.431034 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dq6kw" event={"ID":"307672c5-ae66-4af2-bbbb-1a59c58ee4b2","Type":"ContainerDied","Data":"eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09"} Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.431506 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dq6kw" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.433916 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee96a285511460543183b4fa28b6245bf21bbbd910269f1be813ccaf8a85b09" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.441166 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4nh7\" (UniqueName: \"kubernetes.io/projected/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-kube-api-access-j4nh7\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.441328 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.441346 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7s8r\" (UniqueName: \"kubernetes.io/projected/307672c5-ae66-4af2-bbbb-1a59c58ee4b2-kube-api-access-z7s8r\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.441357 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.850953 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.897468 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chd7x\" (UniqueName: \"kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x\") pod \"cab88b93-9009-49d9-8967-dc8f2b9a7244\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.900207 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts\") pod \"cab88b93-9009-49d9-8967-dc8f2b9a7244\" (UID: \"cab88b93-9009-49d9-8967-dc8f2b9a7244\") " Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.903644 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cab88b93-9009-49d9-8967-dc8f2b9a7244" (UID: "cab88b93-9009-49d9-8967-dc8f2b9a7244"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:28 crc kubenswrapper[5010]: I0203 10:26:28.920860 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x" (OuterVolumeSpecName: "kube-api-access-chd7x") pod "cab88b93-9009-49d9-8967-dc8f2b9a7244" (UID: "cab88b93-9009-49d9-8967-dc8f2b9a7244"). InnerVolumeSpecName "kube-api-access-chd7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.005312 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chd7x\" (UniqueName: \"kubernetes.io/projected/cab88b93-9009-49d9-8967-dc8f2b9a7244-kube-api-access-chd7x\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.005380 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab88b93-9009-49d9-8967-dc8f2b9a7244-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.177065 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.186341 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.313357 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts\") pod \"19aa5f54-6733-454e-a1cf-92ba62fc4068\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.313591 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khj9c\" (UniqueName: \"kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c\") pod \"6fac5d19-4577-4190-b626-83d0b42fd46d\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.313804 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dcr7\" (UniqueName: \"kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7\") pod \"19aa5f54-6733-454e-a1cf-92ba62fc4068\" (UID: \"19aa5f54-6733-454e-a1cf-92ba62fc4068\") " Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.313931 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts\") pod \"6fac5d19-4577-4190-b626-83d0b42fd46d\" (UID: \"6fac5d19-4577-4190-b626-83d0b42fd46d\") " Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.314196 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19aa5f54-6733-454e-a1cf-92ba62fc4068" (UID: "19aa5f54-6733-454e-a1cf-92ba62fc4068"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.314634 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19aa5f54-6733-454e-a1cf-92ba62fc4068-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.315098 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fac5d19-4577-4190-b626-83d0b42fd46d" (UID: "6fac5d19-4577-4190-b626-83d0b42fd46d"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.322880 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7" (OuterVolumeSpecName: "kube-api-access-6dcr7") pod "19aa5f54-6733-454e-a1cf-92ba62fc4068" (UID: "19aa5f54-6733-454e-a1cf-92ba62fc4068"). InnerVolumeSpecName "kube-api-access-6dcr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.329519 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c" (OuterVolumeSpecName: "kube-api-access-khj9c") pod "6fac5d19-4577-4190-b626-83d0b42fd46d" (UID: "6fac5d19-4577-4190-b626-83d0b42fd46d"). InnerVolumeSpecName "kube-api-access-khj9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.417043 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dcr7\" (UniqueName: \"kubernetes.io/projected/19aa5f54-6733-454e-a1cf-92ba62fc4068-kube-api-access-6dcr7\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.417114 5010 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fac5d19-4577-4190-b626-83d0b42fd46d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.417125 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khj9c\" (UniqueName: \"kubernetes.io/projected/6fac5d19-4577-4190-b626-83d0b42fd46d-kube-api-access-khj9c\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.452345 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fztcs" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.452354 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fztcs" event={"ID":"19aa5f54-6733-454e-a1cf-92ba62fc4068","Type":"ContainerDied","Data":"49558af84c27fd529f7f93b79b04100ed86805e41a3a8207cb74e5891388348f"} Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.452443 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49558af84c27fd529f7f93b79b04100ed86805e41a3a8207cb74e5891388348f" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.457942 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" event={"ID":"cab88b93-9009-49d9-8967-dc8f2b9a7244","Type":"ContainerDied","Data":"0a2358b435e4d2a2f42ee4e3e8fcbdc8cf21cbb007e9e788e0e3ad868a511b80"} Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.458443 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a2358b435e4d2a2f42ee4e3e8fcbdc8cf21cbb007e9e788e0e3ad868a511b80" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.458057 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c6bf-account-create-update-9xrwr" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.463906 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" event={"ID":"6fac5d19-4577-4190-b626-83d0b42fd46d","Type":"ContainerDied","Data":"e6e83b9fa88b18c5bf71a71896def34f7759be48f196cbee117b1b6d7fc1256f"} Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.463969 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6e83b9fa88b18c5bf71a71896def34f7759be48f196cbee117b1b6d7fc1256f" Feb 03 10:26:29 crc kubenswrapper[5010]: I0203 10:26:29.464045 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-46aa-account-create-update-5gs9h" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.327891 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.328567 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.399034 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.404911 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.467463 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.467554 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.496773 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c80632c0-72bc-461d-8e87-591d0ddbc1a8","Type":"ContainerStarted","Data":"17f0b34ebc4ff0a6df652cf57cfa5f25ce04e81690b49ed17ee73385232e443a"} Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.498718 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.498755 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.540296 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.540898 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.549191 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.373804076 podStartE2EDuration="33.549160261s" podCreationTimestamp="2026-02-03 10:25:57 +0000 UTC" firstStartedPulling="2026-02-03 10:25:58.851419434 +0000 UTC m=+1429.007395563" lastFinishedPulling="2026-02-03 10:26:30.026775619 +0000 UTC m=+1460.182751748" observedRunningTime="2026-02-03 10:26:30.523546554 +0000 UTC m=+1460.679522683" watchObservedRunningTime="2026-02-03 10:26:30.549160261 
+0000 UTC m=+1460.705136401" Feb 03 10:26:30 crc kubenswrapper[5010]: I0203 10:26:30.616797 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 03 10:26:31 crc kubenswrapper[5010]: I0203 10:26:31.508590 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:31 crc kubenswrapper[5010]: I0203 10:26:31.509839 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.022868 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.805572 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962063 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gd6dz"] Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 10:26:32.962713 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962730 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 10:26:32.962743 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fac5d19-4577-4190-b626-83d0b42fd46d" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962750 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fac5d19-4577-4190-b626-83d0b42fd46d" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 10:26:32.962761 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307672c5-ae66-4af2-bbbb-1a59c58ee4b2" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962768 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="307672c5-ae66-4af2-bbbb-1a59c58ee4b2" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 10:26:32.962793 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab88b93-9009-49d9-8967-dc8f2b9a7244" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962802 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab88b93-9009-49d9-8967-dc8f2b9a7244" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 10:26:32.962828 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="122231ac-5000-44d7-a524-2df85da0abd4" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962835 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="122231ac-5000-44d7-a524-2df85da0abd4" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: E0203 
10:26:32.962863 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19aa5f54-6733-454e-a1cf-92ba62fc4068" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.962873 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="19aa5f54-6733-454e-a1cf-92ba62fc4068" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963096 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="19aa5f54-6733-454e-a1cf-92ba62fc4068" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963110 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="307672c5-ae66-4af2-bbbb-1a59c58ee4b2" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963137 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="122231ac-5000-44d7-a524-2df85da0abd4" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963148 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fac5d19-4577-4190-b626-83d0b42fd46d" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963165 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab88b93-9009-49d9-8967-dc8f2b9a7244" containerName="mariadb-account-create-update" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.963181 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" containerName="mariadb-database-create" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.964115 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.967185 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kdpzn" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.967517 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 03 10:26:32 crc kubenswrapper[5010]: I0203 10:26:32.990776 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.049716 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gd6dz"] Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.126103 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.132095 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t4sz\" (UniqueName: \"kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.132434 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.132504 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.132531 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.235050 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.235118 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.235155 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t4sz\" (UniqueName: \"kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.235385 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.251529 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.260169 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.260495 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.270331 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t4sz\" (UniqueName: \"kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz\") pod \"nova-cell0-conductor-db-sync-gd6dz\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.295399 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.541180 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:26:33 crc kubenswrapper[5010]: I0203 10:26:33.542117 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:26:34 crc kubenswrapper[5010]: I0203 10:26:34.159543 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gd6dz"] Feb 03 10:26:34 crc kubenswrapper[5010]: I0203 10:26:34.574479 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" event={"ID":"49ca9130-4a3c-4c64-8557-5c5e29df551d","Type":"ContainerStarted","Data":"0adb2c17444ab86300890aee767fdf0a4d7295fac27461d1c7107972deeb4e36"} Feb 03 10:26:34 crc kubenswrapper[5010]: I0203 10:26:34.666332 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 10:26:35 crc kubenswrapper[5010]: I0203 10:26:35.901378 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:35 crc kubenswrapper[5010]: I0203 10:26:35.902064 5010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 10:26:36 crc kubenswrapper[5010]: I0203 10:26:36.085649 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 10:26:37 crc kubenswrapper[5010]: I0203 10:26:37.697451 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 10:26:42 crc kubenswrapper[5010]: I0203 10:26:42.807235 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:26:43 crc kubenswrapper[5010]: I0203 10:26:43.134442 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cc988db4-2mpfb" podUID="2fedcc57-b16c-4177-a10e-f627269b4adb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 03 10:26:48 crc kubenswrapper[5010]: I0203 10:26:48.506143 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-log" probeResult="failure" output="Get 
\"https://10.217.0.152:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 10:26:48 crc kubenswrapper[5010]: I0203 10:26:48.506166 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="3ef87127-760d-4f81-8a78-a06d074c7ec3" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 10:26:48 crc kubenswrapper[5010]: I0203 10:26:48.893580 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" event={"ID":"49ca9130-4a3c-4c64-8557-5c5e29df551d","Type":"ContainerStarted","Data":"529624536a7c99d14d746a21069148e69bbb624ecc0d005496493ce4e1241033"} Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.916456 5010 generic.go:334] "Generic (PLEG): container finished" podID="4909daad-030c-436e-acf5-2405a74d8180" containerID="204ff7b5906df6362a9178ddb04b60b73173622cbd63d2c7b2264912f116e282" exitCode=137 Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.916549 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerDied","Data":"204ff7b5906df6362a9178ddb04b60b73173622cbd63d2c7b2264912f116e282"} Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.917048 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4909daad-030c-436e-acf5-2405a74d8180","Type":"ContainerDied","Data":"9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010"} Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.917065 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bf689dea05fc0f3ed74b115d13e839aab5eee31fcc1462d9040ce5ddfa67010" Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.957812 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:49 crc kubenswrapper[5010]: I0203 10:26:49.995047 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" podStartSLOduration=4.413292846 podStartE2EDuration="17.995008676s" podCreationTimestamp="2026-02-03 10:26:32 +0000 UTC" firstStartedPulling="2026-02-03 10:26:34.11373719 +0000 UTC m=+1464.269713319" lastFinishedPulling="2026-02-03 10:26:47.69545301 +0000 UTC m=+1477.851429149" observedRunningTime="2026-02-03 10:26:48.923872432 +0000 UTC m=+1479.079848581" watchObservedRunningTime="2026-02-03 10:26:49.995008676 +0000 UTC m=+1480.150984805" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.130922 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131058 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131207 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vs4n\" (UniqueName: \"kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131416 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131491 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131698 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.131754 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.132099 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.132729 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.133701 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.140984 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n" (OuterVolumeSpecName: "kube-api-access-4vs4n") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "kube-api-access-4vs4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.141139 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts" (OuterVolumeSpecName: "scripts") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.174138 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.236401 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.237432 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") pod \"4909daad-030c-436e-acf5-2405a74d8180\" (UID: \"4909daad-030c-436e-acf5-2405a74d8180\") " Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.238856 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.238902 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.238923 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vs4n\" (UniqueName: \"kubernetes.io/projected/4909daad-030c-436e-acf5-2405a74d8180-kube-api-access-4vs4n\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.238941 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4909daad-030c-436e-acf5-2405a74d8180-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: W0203 10:26:50.239088 5010 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4909daad-030c-436e-acf5-2405a74d8180/volumes/kubernetes.io~secret/combined-ca-bundle Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.239156 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.283641 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data" (OuterVolumeSpecName: "config-data") pod "4909daad-030c-436e-acf5-2405a74d8180" (UID: "4909daad-030c-436e-acf5-2405a74d8180"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.343527 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.343583 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4909daad-030c-436e-acf5-2405a74d8180-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.930012 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.967569 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:50 crc kubenswrapper[5010]: I0203 10:26:50.978017 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.008945 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:51 crc kubenswrapper[5010]: E0203 10:26:51.009757 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="sg-core" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.009792 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="sg-core" Feb 03 10:26:51 crc kubenswrapper[5010]: E0203 10:26:51.009817 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-central-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.009828 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-central-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: E0203 10:26:51.009843 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="proxy-httpd" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.009853 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="proxy-httpd" Feb 03 10:26:51 crc kubenswrapper[5010]: E0203 10:26:51.009865 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-notification-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.009874 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-notification-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.010154 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-central-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.010189 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="proxy-httpd" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.010202 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="ceilometer-notification-agent" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.010237 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4909daad-030c-436e-acf5-2405a74d8180" containerName="sg-core" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.013062 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.017661 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.018019 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.053474 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164622 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vldqz\" (UniqueName: \"kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164691 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164742 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164760 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164795 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164817 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.164912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267031 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vldqz\" (UniqueName: \"kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: 
I0203 10:26:51.267154 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267193 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267230 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267262 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267285 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.267372 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.268030 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.268484 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.279902 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.285696 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.285783 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.293136 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.297523 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vldqz\" (UniqueName: \"kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz\") pod \"ceilometer-0\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.361358 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:26:51 crc kubenswrapper[5010]: I0203 10:26:51.960042 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:26:52 crc kubenswrapper[5010]: I0203 10:26:52.517411 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4909daad-030c-436e-acf5-2405a74d8180" path="/var/lib/kubelet/pods/4909daad-030c-436e-acf5-2405a74d8180/volumes" Feb 03 10:26:52 crc kubenswrapper[5010]: I0203 10:26:52.805071 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Feb 03 10:26:52 crc kubenswrapper[5010]: I0203 10:26:52.805206 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:26:52 crc kubenswrapper[5010]: I0203 10:26:52.806646 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607"} pod="openstack/horizon-7cdcd56868-k9h7g" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 10:26:52 crc kubenswrapper[5010]: I0203 10:26:52.806710 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" containerID="cri-o://4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607" gracePeriod=30 Feb 03 10:26:53 crc kubenswrapper[5010]: I0203 10:26:53.006042 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerStarted","Data":"fa75ed4d16d9d22ec602a49ea9072fdf61887d1412cdd02f5aaf820516fa7e39"} Feb 03 10:26:54 crc kubenswrapper[5010]: I0203 10:26:54.020468 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerStarted","Data":"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35"} Feb 03 10:26:54 crc kubenswrapper[5010]: I0203 10:26:54.021023 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerStarted","Data":"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86"} Feb 03 10:26:55 crc kubenswrapper[5010]: I0203 10:26:55.051124 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerStarted","Data":"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988"} Feb 03 10:26:56 crc kubenswrapper[5010]: I0203 10:26:56.661344 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:26:57 crc kubenswrapper[5010]: I0203 10:26:57.078857 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerStarted","Data":"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4"} Feb 03 10:26:57 crc kubenswrapper[5010]: I0203 10:26:57.080931 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:26:57 crc kubenswrapper[5010]: I0203 10:26:57.122763 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.546346967 podStartE2EDuration="7.122730735s" podCreationTimestamp="2026-02-03 10:26:50 +0000 UTC" firstStartedPulling="2026-02-03 10:26:52.007586193 +0000 UTC m=+1482.163562332" lastFinishedPulling="2026-02-03 10:26:56.583969971 +0000 UTC m=+1486.739946100" observedRunningTime="2026-02-03 10:26:57.11903733 +0000 UTC m=+1487.275013479" watchObservedRunningTime="2026-02-03 10:26:57.122730735 +0000 UTC m=+1487.278706854" Feb 03 10:26:59 crc kubenswrapper[5010]: I0203 10:26:59.075103 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6cc988db4-2mpfb" Feb 03 10:26:59 crc kubenswrapper[5010]: I0203 10:26:59.201121 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"] Feb 03 10:27:02 crc kubenswrapper[5010]: I0203 10:27:02.317875 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:02 crc kubenswrapper[5010]: I0203 10:27:02.318565 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-central-agent" containerID="cri-o://75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86" gracePeriod=30 Feb 03 10:27:02 crc kubenswrapper[5010]: I0203 10:27:02.318607 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="sg-core" containerID="cri-o://2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988" gracePeriod=30 Feb 03 10:27:02 crc kubenswrapper[5010]: I0203 10:27:02.318675 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="proxy-httpd" containerID="cri-o://10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4" gracePeriod=30 Feb 03 10:27:02 crc kubenswrapper[5010]: I0203 10:27:02.318712 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-notification-agent" 
containerID="cri-o://07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35" gracePeriod=30 Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.202985 5010 generic.go:334] "Generic (PLEG): container finished" podID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerID="10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4" exitCode=0 Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.203038 5010 generic.go:334] "Generic (PLEG): container finished" podID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerID="2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988" exitCode=2 Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.203048 5010 generic.go:334] "Generic (PLEG): container finished" podID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerID="07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35" exitCode=0 Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.203078 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerDied","Data":"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4"} Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.203164 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerDied","Data":"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988"} Feb 03 10:27:03 crc kubenswrapper[5010]: I0203 10:27:03.203186 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerDied","Data":"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35"} Feb 03 10:27:07 crc kubenswrapper[5010]: I0203 10:27:07.258751 5010 generic.go:334] "Generic (PLEG): container finished" podID="49ca9130-4a3c-4c64-8557-5c5e29df551d" containerID="529624536a7c99d14d746a21069148e69bbb624ecc0d005496493ce4e1241033" exitCode=0 Feb 03 10:27:07 crc kubenswrapper[5010]: I0203 10:27:07.258838 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" event={"ID":"49ca9130-4a3c-4c64-8557-5c5e29df551d","Type":"ContainerDied","Data":"529624536a7c99d14d746a21069148e69bbb624ecc0d005496493ce4e1241033"} Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.252962 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.281526 5010 generic.go:334] "Generic (PLEG): container finished" podID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerID="75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86" exitCode=0 Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.281617 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerDied","Data":"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86"} Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.281717 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a","Type":"ContainerDied","Data":"fa75ed4d16d9d22ec602a49ea9072fdf61887d1412cdd02f5aaf820516fa7e39"} Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.281753 5010 scope.go:117] "RemoveContainer" containerID="10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.281749 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.318593 5010 scope.go:117] "RemoveContainer" containerID="2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.351226 5010 scope.go:117] "RemoveContainer" containerID="07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360138 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360262 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360310 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360377 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360415 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360519 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vldqz\" (UniqueName: 
\"kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360653 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts\") pod \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\" (UID: \"c1e44dd4-d920-49dc-8581-5fcfcbb1db9a\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360866 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.360894 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.361295 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.361318 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.368743 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts" (OuterVolumeSpecName: "scripts") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.373616 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz" (OuterVolumeSpecName: "kube-api-access-vldqz") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "kube-api-access-vldqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.386601 5010 scope.go:117] "RemoveContainer" containerID="75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.444243 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.463579 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vldqz\" (UniqueName: \"kubernetes.io/projected/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-kube-api-access-vldqz\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.463616 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.463630 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.486393 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.511849 5010 scope.go:117] "RemoveContainer" containerID="10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.516723 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4\": container with ID starting with 10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4 not found: ID does not exist" containerID="10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.516808 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4"} err="failed to get container status \"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4\": rpc error: code = NotFound desc = could not find container \"10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4\": container with ID starting with 10c62cb6c59fe659f2b885abf60241773d365f90b3858dcca005a51bc08972b4 not found: ID does not exist" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.516870 5010 scope.go:117] "RemoveContainer" containerID="2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.519685 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988\": container with ID starting with 2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988 not found: ID does not exist" containerID="2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.519779 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988"} err="failed to get container status \"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988\": rpc error: code = NotFound desc 
= could not find container \"2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988\": container with ID starting with 2cae2b18cbfe4ebff3fd1a15b61bb3c6398c3ca0cfd56f5f8f1441515e7cc988 not found: ID does not exist" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.519824 5010 scope.go:117] "RemoveContainer" containerID="07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.520407 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35\": container with ID starting with 07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35 not found: ID does not exist" containerID="07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.520430 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35"} err="failed to get container status \"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35\": rpc error: code = NotFound desc = could not find container \"07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35\": container with ID starting with 07983070855d658ead93cc83f269fd616e1a6443e24b6d865126a4276cd95a35 not found: ID does not exist" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.520443 5010 scope.go:117] "RemoveContainer" containerID="75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.520767 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86\": container with ID starting with 75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86 not found: ID does not exist" containerID="75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.520794 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86"} err="failed to get container status \"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86\": rpc error: code = NotFound desc = could not find container \"75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86\": container with ID starting with 75304425a8438e2c18b701a6caa81896d379863b199d71812f50391ec23f2c86 not found: ID does not exist" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.525432 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data" (OuterVolumeSpecName: "config-data") pod "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" (UID: "c1e44dd4-d920-49dc-8581-5fcfcbb1db9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.566070 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.566130 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.620324 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.631901 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.637072 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.650605 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.651719 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ca9130-4a3c-4c64-8557-5c5e29df551d" containerName="nova-cell0-conductor-db-sync" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.651756 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ca9130-4a3c-4c64-8557-5c5e29df551d" containerName="nova-cell0-conductor-db-sync" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.651781 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-notification-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.651790 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-notification-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.651818 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="proxy-httpd" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.651831 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="proxy-httpd" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.651846 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="sg-core" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.651855 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="sg-core" Feb 03 10:27:08 crc kubenswrapper[5010]: E0203 10:27:08.651886 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-central-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.651897 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-central-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.652275 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="proxy-httpd" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.652305 5010 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="49ca9130-4a3c-4c64-8557-5c5e29df551d" containerName="nova-cell0-conductor-db-sync" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.652328 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-central-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.652342 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="ceilometer-notification-agent" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.652358 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" containerName="sg-core" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.658392 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.661677 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.662110 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.674051 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.770717 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts\") pod \"49ca9130-4a3c-4c64-8557-5c5e29df551d\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.770958 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t4sz\" (UniqueName: \"kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz\") pod \"49ca9130-4a3c-4c64-8557-5c5e29df551d\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.771118 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle\") pod \"49ca9130-4a3c-4c64-8557-5c5e29df551d\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.771339 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data\") pod \"49ca9130-4a3c-4c64-8557-5c5e29df551d\" (UID: \"49ca9130-4a3c-4c64-8557-5c5e29df551d\") " Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.771814 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mzfj\" (UniqueName: \"kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.771896 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " 
pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.771958 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.772175 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.772231 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.772274 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.772333 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.775592 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts" (OuterVolumeSpecName: "scripts") pod "49ca9130-4a3c-4c64-8557-5c5e29df551d" (UID: "49ca9130-4a3c-4c64-8557-5c5e29df551d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.776046 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz" (OuterVolumeSpecName: "kube-api-access-7t4sz") pod "49ca9130-4a3c-4c64-8557-5c5e29df551d" (UID: "49ca9130-4a3c-4c64-8557-5c5e29df551d"). InnerVolumeSpecName "kube-api-access-7t4sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.801987 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49ca9130-4a3c-4c64-8557-5c5e29df551d" (UID: "49ca9130-4a3c-4c64-8557-5c5e29df551d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.805333 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data" (OuterVolumeSpecName: "config-data") pod "49ca9130-4a3c-4c64-8557-5c5e29df551d" (UID: "49ca9130-4a3c-4c64-8557-5c5e29df551d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.874905 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875018 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875068 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875132 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875184 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mzfj\" (UniqueName: \"kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875269 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875333 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875464 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875483 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875499 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t4sz\" (UniqueName: \"kubernetes.io/projected/49ca9130-4a3c-4c64-8557-5c5e29df551d-kube-api-access-7t4sz\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.875514 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/49ca9130-4a3c-4c64-8557-5c5e29df551d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.876680 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.879307 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.880817 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.880932 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.881026 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.882634 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.897797 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mzfj\" (UniqueName: \"kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj\") pod \"ceilometer-0\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " pod="openstack/ceilometer-0" Feb 03 10:27:08 crc kubenswrapper[5010]: I0203 10:27:08.990481 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.299488 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" event={"ID":"49ca9130-4a3c-4c64-8557-5c5e29df551d","Type":"ContainerDied","Data":"0adb2c17444ab86300890aee767fdf0a4d7295fac27461d1c7107972deeb4e36"} Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.299841 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0adb2c17444ab86300890aee767fdf0a4d7295fac27461d1c7107972deeb4e36" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.299541 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gd6dz" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.444091 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.448040 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.454420 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kdpzn" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.455394 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.486286 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.493791 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.493893 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.493967 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k88pp\" (UniqueName: \"kubernetes.io/projected/26dec936-0343-4d5f-8f2b-cf2a797786b5-kube-api-access-k88pp\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.554843 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.596770 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.596870 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.596930 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k88pp\" (UniqueName: \"kubernetes.io/projected/26dec936-0343-4d5f-8f2b-cf2a797786b5-kube-api-access-k88pp\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.605208 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.606865 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26dec936-0343-4d5f-8f2b-cf2a797786b5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.617591 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k88pp\" (UniqueName: \"kubernetes.io/projected/26dec936-0343-4d5f-8f2b-cf2a797786b5-kube-api-access-k88pp\") pod \"nova-cell0-conductor-0\" (UID: \"26dec936-0343-4d5f-8f2b-cf2a797786b5\") " pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:09 crc kubenswrapper[5010]: I0203 10:27:09.790179 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:10 crc kubenswrapper[5010]: I0203 10:27:10.320697 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerStarted","Data":"cd6841d336caf71fc510297facb1277599cbdeca80d5b944442ca08505d329ae"} Feb 03 10:27:10 crc kubenswrapper[5010]: I0203 10:27:10.331792 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 10:27:10 crc kubenswrapper[5010]: I0203 10:27:10.520802 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e44dd4-d920-49dc-8581-5fcfcbb1db9a" path="/var/lib/kubelet/pods/c1e44dd4-d920-49dc-8581-5fcfcbb1db9a/volumes" Feb 03 10:27:11 crc kubenswrapper[5010]: I0203 10:27:11.334441 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"26dec936-0343-4d5f-8f2b-cf2a797786b5","Type":"ContainerStarted","Data":"294e87c7889391f0b738633cfb50158a96d7c8fa5e589924d23c5e027c882204"} Feb 03 10:27:11 crc kubenswrapper[5010]: I0203 10:27:11.335015 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"26dec936-0343-4d5f-8f2b-cf2a797786b5","Type":"ContainerStarted","Data":"90dc7ccf86efebfda76973b3da7ae5f518b3f3eb365eb4de3b95d035762bfb99"} Feb 03 10:27:11 crc kubenswrapper[5010]: I0203 10:27:11.337242 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:11 crc kubenswrapper[5010]: I0203 10:27:11.339140 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerStarted","Data":"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee"} Feb 03 10:27:11 crc kubenswrapper[5010]: I0203 10:27:11.382614 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.382572767 podStartE2EDuration="2.382572767s" podCreationTimestamp="2026-02-03 10:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:11.369919282 +0000 UTC m=+1501.525895411" watchObservedRunningTime="2026-02-03 10:27:11.382572767 +0000 UTC m=+1501.538548896" Feb 03 10:27:13 crc 
kubenswrapper[5010]: I0203 10:27:13.368574 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerStarted","Data":"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd"} Feb 03 10:27:14 crc kubenswrapper[5010]: I0203 10:27:14.385252 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerStarted","Data":"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28"} Feb 03 10:27:16 crc kubenswrapper[5010]: I0203 10:27:16.390729 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:27:16 crc kubenswrapper[5010]: I0203 10:27:16.391631 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:27:16 crc kubenswrapper[5010]: I0203 10:27:16.422896 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerStarted","Data":"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e"} Feb 03 10:27:16 crc kubenswrapper[5010]: I0203 10:27:16.426532 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:27:16 crc kubenswrapper[5010]: I0203 10:27:16.468183 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.251652198 podStartE2EDuration="8.468134829s" podCreationTimestamp="2026-02-03 10:27:08 +0000 UTC" firstStartedPulling="2026-02-03 10:27:09.536345622 +0000 UTC m=+1499.692321751" lastFinishedPulling="2026-02-03 10:27:15.752828253 +0000 UTC m=+1505.908804382" observedRunningTime="2026-02-03 10:27:16.466151908 +0000 UTC m=+1506.622128027" watchObservedRunningTime="2026-02-03 10:27:16.468134829 +0000 UTC m=+1506.624110978" Feb 03 10:27:19 crc kubenswrapper[5010]: I0203 10:27:19.829129 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.392466 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-bqztf"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.394565 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.400836 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.401261 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.416309 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bqztf"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.441893 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjhgc\" (UniqueName: \"kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.442105 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.442235 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.442317 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.545557 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.546286 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.546383 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.546599 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjhgc\" (UniqueName: 
\"kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.555057 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.556580 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.568447 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.579174 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjhgc\" (UniqueName: \"kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc\") pod \"nova-cell0-cell-mapping-bqztf\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.726422 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.729623 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.735843 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.737613 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.750965 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.820804 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.827081 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.840449 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.853665 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.867956 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868044 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868114 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868140 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868174 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868230 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srb2s\" (UniqueName: \"kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.868268 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfcm\" (UniqueName: \"kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.881127 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.885462 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.889703 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.910325 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972112 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5mj\" (UniqueName: \"kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972192 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srb2s\" (UniqueName: \"kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972273 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndfcm\" (UniqueName: \"kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972371 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972445 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972528 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972654 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972752 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972799 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.972879 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.982939 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.985387 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:20 crc kubenswrapper[5010]: I0203 10:27:20.987255 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.016761 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.021051 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndfcm\" (UniqueName: \"kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm\") pod \"nova-api-0\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " pod="openstack/nova-api-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.021104 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.029913 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srb2s\" (UniqueName: \"kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s\") pod \"nova-scheduler-0\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.060110 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.079151 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.079299 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5mj\" (UniqueName: \"kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.079402 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.088777 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.091773 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.155892 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.158051 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.182776 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5mj\" (UniqueName: \"kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj\") pod \"nova-cell1-novncproxy-0\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.182970 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqcdz\" (UniqueName: \"kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.183045 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.183118 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.183168 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.183731 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.231881 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.243280 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.245803 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.299622 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"] Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.301475 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.301639 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.302031 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqcdz\" (UniqueName: \"kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.302138 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.302396 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.303539 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.340799 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.349793 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqcdz\" (UniqueName: \"kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.357702 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data\") pod \"nova-metadata-0\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.369412 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"] Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.409397 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 
10:27:21.422465 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.422859 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.423193 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.423284 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.423319 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdv6g\" (UniqueName: \"kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.527324 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.527408 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.527442 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdv6g\" (UniqueName: \"kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.527481 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 
03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.527583 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.528952 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.532109 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.537288 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.538047 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.542984 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.543448 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.545284 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.579607 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdv6g\" (UniqueName: \"kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g\") pod \"dnsmasq-dns-757b4f8459-x25nd\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") " pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.702491 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:21 crc kubenswrapper[5010]: I0203 10:27:21.837105 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bqztf"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.055978 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:22 crc kubenswrapper[5010]: W0203 10:27:22.059884 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddae76c0d_99bf_42f4_8678_5c1693262ecc.slice/crio-6078c7a1e48bd775bca8b987098ebda1a5e82da5d6e8ba44c4019d49bd1f8dd5 WatchSource:0}: Error finding container 6078c7a1e48bd775bca8b987098ebda1a5e82da5d6e8ba44c4019d49bd1f8dd5: Status 404 returned error can't find the container with id 6078c7a1e48bd775bca8b987098ebda1a5e82da5d6e8ba44c4019d49bd1f8dd5 Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.319090 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwnxk"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.322333 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.325693 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.327316 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.373461 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwnxk"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.398096 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.399249 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.399481 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.399643 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.399840 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rkj\" (UniqueName: \"kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: 
\"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.459103 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.502012 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.502111 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.502174 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.502929 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4rkj\" (UniqueName: \"kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.511432 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.514878 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.516100 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.530331 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4rkj\" (UniqueName: \"kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj\") pod \"nova-cell1-conductor-db-sync-zwnxk\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.593491 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e","Type":"ContainerStarted","Data":"bf460f6ef526dd4f94d755e6904b0e4b071bb805f8064c527674ef4f7512a907"} Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.594385 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.595624 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bqztf" event={"ID":"bd352716-06a1-47da-9d5d-179bfed70cbe","Type":"ContainerStarted","Data":"9df92dcb078ed6d52131766accb050ab09c268253b0a5a65b5f79c4623de44a8"} Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.595660 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bqztf" event={"ID":"bd352716-06a1-47da-9d5d-179bfed70cbe","Type":"ContainerStarted","Data":"2bad36a390bd1a99859cef6466645f1e43e62c5d6ab7ef7aed9fbbdabd1bb08c"} Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.608264 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerStarted","Data":"6078c7a1e48bd775bca8b987098ebda1a5e82da5d6e8ba44c4019d49bd1f8dd5"} Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.610937 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4df0ad18-8721-40ef-91bc-c609d61f1c1b","Type":"ContainerStarted","Data":"53f9f5ad7c65c9cd148ac8aad3fd34e98580d6dfe75ba51eece28e29be12ce47"} Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.669469 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.731465 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-bqztf" podStartSLOduration=2.731437263 podStartE2EDuration="2.731437263s" podCreationTimestamp="2026-02-03 10:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:22.651652134 +0000 UTC m=+1512.807628263" watchObservedRunningTime="2026-02-03 10:27:22.731437263 +0000 UTC m=+1512.887413392" Feb 03 10:27:22 crc kubenswrapper[5010]: I0203 10:27:22.844531 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"] Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.457309 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwnxk"] Feb 03 10:27:23 crc kubenswrapper[5010]: W0203 10:27:23.484495 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod726ff8cb_3f2f_41a6_a61e_a79ed194505f.slice/crio-06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b WatchSource:0}: Error finding container 06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b: Status 404 returned error can't find the container with id 06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641029 5010 generic.go:334] "Generic (PLEG): container finished" podID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerID="4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607" exitCode=137 Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641164 5010 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerDied","Data":"4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607"} Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641295 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon-log" containerID="cri-o://d39b7b37971eb5d63b6cabefb740041e4cc9cc6265fc84bc4b6ff52605291d6a" gracePeriod=30 Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641442 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cdcd56868-k9h7g" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" containerID="cri-o://ccb768185c1be80c1cf2232c6f15632edb6af133c55f2bd369d8a13606beb3d6" gracePeriod=30 Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641365 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerStarted","Data":"ccb768185c1be80c1cf2232c6f15632edb6af133c55f2bd369d8a13606beb3d6"} Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.641672 5010 scope.go:117] "RemoveContainer" containerID="2cc2ce22d6ea86e28f6eb264d0d9c9e725b7685d6ab0fd02531064a6b9b028b0" Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.649515 5010 generic.go:334] "Generic (PLEG): container finished" podID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerID="1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33" exitCode=0 Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.649620 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" event={"ID":"55ad6744-8ba2-49c4-bf2c-986f85f40079","Type":"ContainerDied","Data":"1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33"} Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.650523 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" event={"ID":"55ad6744-8ba2-49c4-bf2c-986f85f40079","Type":"ContainerStarted","Data":"7edb2d5b18afc723b6414cab56e64b2430add9e831d1db279a0d0981b7c44bb5"} Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.654286 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerStarted","Data":"5fcbbf7f928cc0dae4b0f264be7c99f38aab374b25b87187f9d00a621247d310"} Feb 03 10:27:23 crc kubenswrapper[5010]: I0203 10:27:23.663074 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" event={"ID":"726ff8cb-3f2f-41a6-a61e-a79ed194505f","Type":"ContainerStarted","Data":"06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b"} Feb 03 10:27:24 crc kubenswrapper[5010]: I0203 10:27:24.676711 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" event={"ID":"726ff8cb-3f2f-41a6-a61e-a79ed194505f","Type":"ContainerStarted","Data":"9ad6b084a459424fdad0649a5c871c7f22695bf5efe4abdfaf37dff65c794a08"} Feb 03 10:27:24 crc kubenswrapper[5010]: I0203 10:27:24.714748 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" podStartSLOduration=2.714707497 podStartE2EDuration="2.714707497s" podCreationTimestamp="2026-02-03 10:27:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:24.700225385 +0000 UTC m=+1514.856201524" watchObservedRunningTime="2026-02-03 10:27:24.714707497 +0000 UTC m=+1514.870683626" Feb 03 10:27:25 crc kubenswrapper[5010]: I0203 10:27:25.208305 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:25 crc kubenswrapper[5010]: I0203 10:27:25.221097 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.717272 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4df0ad18-8721-40ef-91bc-c609d61f1c1b","Type":"ContainerStarted","Data":"ae9cd98547d8fff1706d863c1e8f43d79f4ce19a78307424e4a816129ff20e12"} Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.719951 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e","Type":"ContainerStarted","Data":"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d"} Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.717519 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://ae9cd98547d8fff1706d863c1e8f43d79f4ce19a78307424e4a816129ff20e12" gracePeriod=30 Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.734934 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerStarted","Data":"241c9e9f88442e26f4c60b5bf7f593615d35fb056df34c097b437a3289e1ed1e"} Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.746949 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.095624143 podStartE2EDuration="7.746918345s" podCreationTimestamp="2026-02-03 10:27:20 +0000 UTC" firstStartedPulling="2026-02-03 10:27:22.386652839 +0000 UTC m=+1512.542628958" lastFinishedPulling="2026-02-03 10:27:27.037947031 +0000 UTC m=+1517.193923160" observedRunningTime="2026-02-03 10:27:27.741955978 +0000 UTC m=+1517.897932117" watchObservedRunningTime="2026-02-03 10:27:27.746918345 +0000 UTC m=+1517.902894474" Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.755741 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" event={"ID":"55ad6744-8ba2-49c4-bf2c-986f85f40079","Type":"ContainerStarted","Data":"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74"} Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.756195 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.760546 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerStarted","Data":"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303"} Feb 03 10:27:27 crc kubenswrapper[5010]: I0203 10:27:27.776907 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.111559852 podStartE2EDuration="7.776877744s" podCreationTimestamp="2026-02-03 
10:27:20 +0000 UTC" firstStartedPulling="2026-02-03 10:27:22.369565631 +0000 UTC m=+1512.525541760" lastFinishedPulling="2026-02-03 10:27:27.034883523 +0000 UTC m=+1517.190859652" observedRunningTime="2026-02-03 10:27:27.766243721 +0000 UTC m=+1517.922219850" watchObservedRunningTime="2026-02-03 10:27:27.776877744 +0000 UTC m=+1517.932853873" Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.775524 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerStarted","Data":"c99bed3bf87dd9576980ecaf735b0a2713f9773f5d114b1af04d87bd2cd7c5e6"} Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.782861 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-log" containerID="cri-o://30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" gracePeriod=30 Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.783213 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerStarted","Data":"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0"} Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.783816 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-metadata" containerID="cri-o://3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" gracePeriod=30 Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.807903 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" podStartSLOduration=7.807877278 podStartE2EDuration="7.807877278s" podCreationTimestamp="2026-02-03 10:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:27.793679616 +0000 UTC m=+1517.949655745" watchObservedRunningTime="2026-02-03 10:27:28.807877278 +0000 UTC m=+1518.963853407" Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.813580 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.836462235 podStartE2EDuration="8.813553663s" podCreationTimestamp="2026-02-03 10:27:20 +0000 UTC" firstStartedPulling="2026-02-03 10:27:22.072744219 +0000 UTC m=+1512.228720348" lastFinishedPulling="2026-02-03 10:27:27.049835647 +0000 UTC m=+1517.205811776" observedRunningTime="2026-02-03 10:27:28.803972847 +0000 UTC m=+1518.959948976" watchObservedRunningTime="2026-02-03 10:27:28.813553663 +0000 UTC m=+1518.969529792" Feb 03 10:27:28 crc kubenswrapper[5010]: I0203 10:27:28.846812 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.406280887 podStartE2EDuration="7.846785957s" podCreationTimestamp="2026-02-03 10:27:21 +0000 UTC" firstStartedPulling="2026-02-03 10:27:22.594380853 +0000 UTC m=+1512.750356982" lastFinishedPulling="2026-02-03 10:27:27.034885923 +0000 UTC m=+1517.190862052" observedRunningTime="2026-02-03 10:27:28.841361577 +0000 UTC m=+1518.997337706" watchObservedRunningTime="2026-02-03 10:27:28.846785957 +0000 UTC m=+1519.002762086" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.452812 5010 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.553116 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqcdz\" (UniqueName: \"kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz\") pod \"7e9abb34-c41e-4b86-835c-1107ad5eec49\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.553324 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle\") pod \"7e9abb34-c41e-4b86-835c-1107ad5eec49\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.553488 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data\") pod \"7e9abb34-c41e-4b86-835c-1107ad5eec49\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.553535 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs\") pod \"7e9abb34-c41e-4b86-835c-1107ad5eec49\" (UID: \"7e9abb34-c41e-4b86-835c-1107ad5eec49\") " Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.553858 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs" (OuterVolumeSpecName: "logs") pod "7e9abb34-c41e-4b86-835c-1107ad5eec49" (UID: "7e9abb34-c41e-4b86-835c-1107ad5eec49"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.555673 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e9abb34-c41e-4b86-835c-1107ad5eec49-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.564997 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz" (OuterVolumeSpecName: "kube-api-access-tqcdz") pod "7e9abb34-c41e-4b86-835c-1107ad5eec49" (UID: "7e9abb34-c41e-4b86-835c-1107ad5eec49"). InnerVolumeSpecName "kube-api-access-tqcdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.610413 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e9abb34-c41e-4b86-835c-1107ad5eec49" (UID: "7e9abb34-c41e-4b86-835c-1107ad5eec49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.616437 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data" (OuterVolumeSpecName: "config-data") pod "7e9abb34-c41e-4b86-835c-1107ad5eec49" (UID: "7e9abb34-c41e-4b86-835c-1107ad5eec49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.658402 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.658453 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e9abb34-c41e-4b86-835c-1107ad5eec49-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.658467 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqcdz\" (UniqueName: \"kubernetes.io/projected/7e9abb34-c41e-4b86-835c-1107ad5eec49-kube-api-access-tqcdz\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.801512 5010 generic.go:334] "Generic (PLEG): container finished" podID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerID="3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" exitCode=0 Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.801552 5010 generic.go:334] "Generic (PLEG): container finished" podID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerID="30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" exitCode=143 Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.802606 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerDied","Data":"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0"} Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.802652 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.802691 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerDied","Data":"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303"} Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.802710 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e9abb34-c41e-4b86-835c-1107ad5eec49","Type":"ContainerDied","Data":"5fcbbf7f928cc0dae4b0f264be7c99f38aab374b25b87187f9d00a621247d310"} Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.802733 5010 scope.go:117] "RemoveContainer" containerID="3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.839591 5010 scope.go:117] "RemoveContainer" containerID="30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.865534 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.889205 5010 scope.go:117] "RemoveContainer" containerID="3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" Feb 03 10:27:29 crc kubenswrapper[5010]: E0203 10:27:29.889803 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0\": container with ID starting with 3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0 not found: ID does not exist" containerID="3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.889838 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0"} err="failed to get container status \"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0\": rpc error: code = NotFound desc = could not find container \"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0\": container with ID starting with 3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0 not found: ID does not exist" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.889864 5010 scope.go:117] "RemoveContainer" containerID="30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" Feb 03 10:27:29 crc kubenswrapper[5010]: E0203 10:27:29.891588 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303\": container with ID starting with 30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303 not found: ID does not exist" containerID="30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.891630 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303"} err="failed to get container status \"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303\": rpc error: code = NotFound desc = could not find container \"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303\": container with ID starting with 
30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303 not found: ID does not exist" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.891653 5010 scope.go:117] "RemoveContainer" containerID="3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.892547 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0"} err="failed to get container status \"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0\": rpc error: code = NotFound desc = could not find container \"3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0\": container with ID starting with 3c414afcd4b8af6622acb054ec23b94b5df4af0d100b01d492d193ab6409dbb0 not found: ID does not exist" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.892594 5010 scope.go:117] "RemoveContainer" containerID="30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.894027 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303"} err="failed to get container status \"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303\": rpc error: code = NotFound desc = could not find container \"30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303\": container with ID starting with 30415f201ca80920d3fda4a6c527cfa9fabeeda332a6e1dbd4d91d738d45e303 not found: ID does not exist" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.899771 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.910968 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:29 crc kubenswrapper[5010]: E0203 10:27:29.911941 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-log" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.911988 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-log" Feb 03 10:27:29 crc kubenswrapper[5010]: E0203 10:27:29.912062 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-metadata" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.912075 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-metadata" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.912364 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-log" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.912396 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" containerName="nova-metadata-metadata" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.914581 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.918237 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.918334 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.925527 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.964457 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.964515 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.964559 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.964603 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddtdd\" (UniqueName: \"kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:29 crc kubenswrapper[5010]: I0203 10:27:29.964642 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.065866 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddtdd\" (UniqueName: \"kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.065942 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.066067 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.066107 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.066150 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.066604 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.070068 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.071839 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.077040 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.084239 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddtdd\" (UniqueName: \"kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd\") pod \"nova-metadata-0\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.249651 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.527098 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e9abb34-c41e-4b86-835c-1107ad5eec49" path="/var/lib/kubelet/pods/7e9abb34-c41e-4b86-835c-1107ad5eec49/volumes" Feb 03 10:27:30 crc kubenswrapper[5010]: I0203 10:27:30.992539 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.061473 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.061551 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.242434 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.242519 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.248407 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.305375 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.829704 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerStarted","Data":"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2"} Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.830180 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerStarted","Data":"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8"} Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.830199 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerStarted","Data":"d287adc54325882a622782a3232f723bb21563ecbced55297361e7dc2d758abc"} Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.866763 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.866639748 podStartE2EDuration="2.866639748s" podCreationTimestamp="2026-02-03 10:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:31.858387846 +0000 UTC m=+1522.014363975" watchObservedRunningTime="2026-02-03 10:27:31.866639748 +0000 UTC m=+1522.022615887" Feb 03 10:27:31 crc kubenswrapper[5010]: I0203 10:27:31.895600 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 03 10:27:32 crc kubenswrapper[5010]: I0203 10:27:32.148545 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 10:27:32 crc kubenswrapper[5010]: I0203 10:27:32.148739 5010 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 10:27:32 crc kubenswrapper[5010]: I0203 10:27:32.804208 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.250933 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.251455 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.877857 5010 generic.go:334] "Generic (PLEG): container finished" podID="726ff8cb-3f2f-41a6-a61e-a79ed194505f" containerID="9ad6b084a459424fdad0649a5c871c7f22695bf5efe4abdfaf37dff65c794a08" exitCode=0 Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.877935 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" event={"ID":"726ff8cb-3f2f-41a6-a61e-a79ed194505f","Type":"ContainerDied","Data":"9ad6b084a459424fdad0649a5c871c7f22695bf5efe4abdfaf37dff65c794a08"} Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.880342 5010 generic.go:334] "Generic (PLEG): container finished" podID="bd352716-06a1-47da-9d5d-179bfed70cbe" containerID="9df92dcb078ed6d52131766accb050ab09c268253b0a5a65b5f79c4623de44a8" exitCode=0 Feb 03 10:27:35 crc kubenswrapper[5010]: I0203 10:27:35.880392 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bqztf" event={"ID":"bd352716-06a1-47da-9d5d-179bfed70cbe","Type":"ContainerDied","Data":"9df92dcb078ed6d52131766accb050ab09c268253b0a5a65b5f79c4623de44a8"} Feb 03 10:27:36 crc kubenswrapper[5010]: I0203 10:27:36.712580 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" Feb 03 10:27:36 crc kubenswrapper[5010]: I0203 10:27:36.837402 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:27:36 crc kubenswrapper[5010]: I0203 10:27:36.839318 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="dnsmasq-dns" containerID="cri-o://fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77" gracePeriod=10 Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.606801 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.722736 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4rkj\" (UniqueName: \"kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj\") pod \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.722801 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts\") pod \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.722834 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle\") pod \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.722905 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data\") pod \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\" (UID: \"726ff8cb-3f2f-41a6-a61e-a79ed194505f\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.731794 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts" (OuterVolumeSpecName: "scripts") pod "726ff8cb-3f2f-41a6-a61e-a79ed194505f" (UID: "726ff8cb-3f2f-41a6-a61e-a79ed194505f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.732636 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.733573 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj" (OuterVolumeSpecName: "kube-api-access-w4rkj") pod "726ff8cb-3f2f-41a6-a61e-a79ed194505f" (UID: "726ff8cb-3f2f-41a6-a61e-a79ed194505f"). InnerVolumeSpecName "kube-api-access-w4rkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.745878 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.765613 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data" (OuterVolumeSpecName: "config-data") pod "726ff8cb-3f2f-41a6-a61e-a79ed194505f" (UID: "726ff8cb-3f2f-41a6-a61e-a79ed194505f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.793416 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "726ff8cb-3f2f-41a6-a61e-a79ed194505f" (UID: "726ff8cb-3f2f-41a6-a61e-a79ed194505f"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.825135 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle\") pod \"bd352716-06a1-47da-9d5d-179bfed70cbe\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.825258 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjhgc\" (UniqueName: \"kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc\") pod \"bd352716-06a1-47da-9d5d-179bfed70cbe\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.825352 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data\") pod \"bd352716-06a1-47da-9d5d-179bfed70cbe\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.825507 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts\") pod \"bd352716-06a1-47da-9d5d-179bfed70cbe\" (UID: \"bd352716-06a1-47da-9d5d-179bfed70cbe\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.826188 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4rkj\" (UniqueName: \"kubernetes.io/projected/726ff8cb-3f2f-41a6-a61e-a79ed194505f-kube-api-access-w4rkj\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.826211 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.826235 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.826245 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726ff8cb-3f2f-41a6-a61e-a79ed194505f-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.832186 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts" (OuterVolumeSpecName: "scripts") pod "bd352716-06a1-47da-9d5d-179bfed70cbe" (UID: "bd352716-06a1-47da-9d5d-179bfed70cbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.832869 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc" (OuterVolumeSpecName: "kube-api-access-jjhgc") pod "bd352716-06a1-47da-9d5d-179bfed70cbe" (UID: "bd352716-06a1-47da-9d5d-179bfed70cbe"). InnerVolumeSpecName "kube-api-access-jjhgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.886021 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data" (OuterVolumeSpecName: "config-data") pod "bd352716-06a1-47da-9d5d-179bfed70cbe" (UID: "bd352716-06a1-47da-9d5d-179bfed70cbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.886081 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd352716-06a1-47da-9d5d-179bfed70cbe" (UID: "bd352716-06a1-47da-9d5d-179bfed70cbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.927739 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.928035 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.928083 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.928209 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8w9d\" (UniqueName: \"kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.928330 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.928469 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config\") pod \"b88c8b02-54df-4761-acc8-c959005f4444\" (UID: \"b88c8b02-54df-4761-acc8-c959005f4444\") " Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.929031 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.929048 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.929062 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjhgc\" (UniqueName: \"kubernetes.io/projected/bd352716-06a1-47da-9d5d-179bfed70cbe-kube-api-access-jjhgc\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.929071 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd352716-06a1-47da-9d5d-179bfed70cbe-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.933597 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d" (OuterVolumeSpecName: "kube-api-access-d8w9d") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "kube-api-access-d8w9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.945584 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.945742 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zwnxk" event={"ID":"726ff8cb-3f2f-41a6-a61e-a79ed194505f","Type":"ContainerDied","Data":"06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b"} Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.945820 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06bc716526af09e9468bec49130055a7e19cac3913d0b3e2ec8f37184dcd4c5b" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.978511 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bqztf" event={"ID":"bd352716-06a1-47da-9d5d-179bfed70cbe","Type":"ContainerDied","Data":"2bad36a390bd1a99859cef6466645f1e43e62c5d6ab7ef7aed9fbbdabd1bb08c"} Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.978594 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bad36a390bd1a99859cef6466645f1e43e62c5d6ab7ef7aed9fbbdabd1bb08c" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.978729 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bqztf" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.994327 5010 generic.go:334] "Generic (PLEG): container finished" podID="b88c8b02-54df-4761-acc8-c959005f4444" containerID="fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77" exitCode=0 Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.994406 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" event={"ID":"b88c8b02-54df-4761-acc8-c959005f4444","Type":"ContainerDied","Data":"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77"} Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.994461 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" event={"ID":"b88c8b02-54df-4761-acc8-c959005f4444","Type":"ContainerDied","Data":"2d51e4ddd011d0ec5a5a6ac940b6dc440f8c2ebbdfedfd082c8cf295f749780f"} Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.994493 5010 scope.go:117] "RemoveContainer" containerID="fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77" Feb 03 10:27:37 crc kubenswrapper[5010]: I0203 10:27:37.994718 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6vbfz" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.013274 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.032139 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8w9d\" (UniqueName: \"kubernetes.io/projected/b88c8b02-54df-4761-acc8-c959005f4444-kube-api-access-d8w9d\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.032198 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.072625 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config" (OuterVolumeSpecName: "config") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.080794 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.122591 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.125954 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.126673 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="dnsmasq-dns" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.126698 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="dnsmasq-dns" Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.126741 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726ff8cb-3f2f-41a6-a61e-a79ed194505f" containerName="nova-cell1-conductor-db-sync" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.126749 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="726ff8cb-3f2f-41a6-a61e-a79ed194505f" containerName="nova-cell1-conductor-db-sync" Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.126769 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="init" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.126776 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="init" Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.126804 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd352716-06a1-47da-9d5d-179bfed70cbe" containerName="nova-manage" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.126811 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd352716-06a1-47da-9d5d-179bfed70cbe" containerName="nova-manage" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.127069 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="726ff8cb-3f2f-41a6-a61e-a79ed194505f" containerName="nova-cell1-conductor-db-sync" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.127090 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd352716-06a1-47da-9d5d-179bfed70cbe" containerName="nova-manage" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.127100 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88c8b02-54df-4761-acc8-c959005f4444" containerName="dnsmasq-dns" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.128024 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.135834 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.135875 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.135888 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.138155 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.145990 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.190414 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b88c8b02-54df-4761-acc8-c959005f4444" (UID: "b88c8b02-54df-4761-acc8-c959005f4444"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.226382 5010 scope.go:117] "RemoveContainer" containerID="49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.237938 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.238074 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kcxg\" (UniqueName: \"kubernetes.io/projected/291a9878-85fe-4988-8a7d-1da10ac49b23-kube-api-access-8kcxg\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.238152 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.238363 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b88c8b02-54df-4761-acc8-c959005f4444-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.240128 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.240603 5010 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-log" containerID="cri-o://241c9e9f88442e26f4c60b5bf7f593615d35fb056df34c097b437a3289e1ed1e" gracePeriod=30 Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.241048 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-api" containerID="cri-o://c99bed3bf87dd9576980ecaf735b0a2713f9773f5d114b1af04d87bd2cd7c5e6" gracePeriod=30 Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.367609 5010 scope.go:117] "RemoveContainer" containerID="fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77" Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.368781 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77\": container with ID starting with fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77 not found: ID does not exist" containerID="fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.368901 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77"} err="failed to get container status \"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77\": rpc error: code = NotFound desc = could not find container \"fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77\": container with ID starting with fdfb99b919da4976435885faa64d8714eb8c94a1e3131223fba09ac5b0a6ca77 not found: ID does not exist" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.369014 5010 scope.go:117] "RemoveContainer" containerID="49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50" Feb 03 10:27:38 crc kubenswrapper[5010]: E0203 10:27:38.369361 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50\": container with ID starting with 49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50 not found: ID does not exist" containerID="49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.369505 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50"} err="failed to get container status \"49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50\": rpc error: code = NotFound desc = could not find container \"49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50\": container with ID starting with 49ff5a76d40c8d3740c82b06df88f2bec310e05f57c31efe76c162d534248c50 not found: ID does not exist" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.390958 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.391349 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerName="nova-scheduler-scheduler" containerID="cri-o://fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" gracePeriod=30 Feb 03 10:27:38 crc 
kubenswrapper[5010]: I0203 10:27:38.395613 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kcxg\" (UniqueName: \"kubernetes.io/projected/291a9878-85fe-4988-8a7d-1da10ac49b23-kube-api-access-8kcxg\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.395830 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.396307 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.408582 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.413179 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/291a9878-85fe-4988-8a7d-1da10ac49b23-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.478435 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.478779 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-log" containerID="cri-o://62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8" gracePeriod=30 Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.479249 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kcxg\" (UniqueName: \"kubernetes.io/projected/291a9878-85fe-4988-8a7d-1da10ac49b23-kube-api-access-8kcxg\") pod \"nova-cell1-conductor-0\" (UID: \"291a9878-85fe-4988-8a7d-1da10ac49b23\") " pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.479440 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-metadata" containerID="cri-o://add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2" gracePeriod=30 Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.529903 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.530293 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6vbfz"] Feb 03 10:27:38 crc kubenswrapper[5010]: I0203 10:27:38.670085 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.016204 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.058078 5010 generic.go:334] "Generic (PLEG): container finished" podID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerID="62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8" exitCode=143 Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.058281 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerDied","Data":"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8"} Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.075803 5010 generic.go:334] "Generic (PLEG): container finished" podID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerID="241c9e9f88442e26f4c60b5bf7f593615d35fb056df34c097b437a3289e1ed1e" exitCode=143 Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.075975 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerDied","Data":"241c9e9f88442e26f4c60b5bf7f593615d35fb056df34c097b437a3289e1ed1e"} Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.402232 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.828800 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.940101 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddtdd\" (UniqueName: \"kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd\") pod \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.940838 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs\") pod \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.941085 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data\") pod \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.941428 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle\") pod \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.941474 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs\") pod \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\" (UID: \"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e\") " Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.942026 5010 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs" (OuterVolumeSpecName: "logs") pod "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" (UID: "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.949481 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd" (OuterVolumeSpecName: "kube-api-access-ddtdd") pod "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" (UID: "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e"). InnerVolumeSpecName "kube-api-access-ddtdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:39 crc kubenswrapper[5010]: I0203 10:27:39.989878 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data" (OuterVolumeSpecName: "config-data") pod "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" (UID: "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.002356 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" (UID: "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.017096 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" (UID: "9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.044883 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.044940 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.044954 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddtdd\" (UniqueName: \"kubernetes.io/projected/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-kube-api-access-ddtdd\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.044970 5010 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.044985 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.093674 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"291a9878-85fe-4988-8a7d-1da10ac49b23","Type":"ContainerStarted","Data":"da94971cc58ba2c42c3ad1836afff46400802415777abe34ceadccb5855776c3"} Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.093743 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"291a9878-85fe-4988-8a7d-1da10ac49b23","Type":"ContainerStarted","Data":"85cecab4f6c9af2519d22c0f5ca34ce44fde0330b3c97d2f01236561ec50ec88"} Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.093771 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.096730 5010 generic.go:334] "Generic (PLEG): container finished" podID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerID="add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2" exitCode=0 Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.096778 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.096824 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerDied","Data":"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2"} Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.096874 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e","Type":"ContainerDied","Data":"d287adc54325882a622782a3232f723bb21563ecbced55297361e7dc2d758abc"} Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.096901 5010 scope.go:117] "RemoveContainer" containerID="add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.126395 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.126352433 podStartE2EDuration="2.126352433s" podCreationTimestamp="2026-02-03 10:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:40.115849704 +0000 UTC m=+1530.271825853" watchObservedRunningTime="2026-02-03 10:27:40.126352433 +0000 UTC m=+1530.282328582" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.133591 5010 scope.go:117] "RemoveContainer" containerID="62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.162187 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.178791 5010 scope.go:117] "RemoveContainer" containerID="add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.183683 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:40 crc kubenswrapper[5010]: E0203 10:27:40.185516 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2\": container with ID starting with add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2 not found: ID does not exist" containerID="add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.185616 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2"} err="failed to get container status \"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2\": rpc error: code = NotFound desc = could not find container \"add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2\": container with ID starting with add5ac144dfc3556fd42254b1aa65042c00350b49395c269e432f30eb5babec2 not found: ID does not exist" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.185667 5010 scope.go:117] "RemoveContainer" containerID="62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8" Feb 03 10:27:40 crc kubenswrapper[5010]: E0203 10:27:40.189828 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8\": 
container with ID starting with 62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8 not found: ID does not exist" containerID="62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.189905 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8"} err="failed to get container status \"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8\": rpc error: code = NotFound desc = could not find container \"62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8\": container with ID starting with 62df5f5c6328064e8ca72f39444b7e8408e2ae8c3cd7d34a5972230c67fcf2c8 not found: ID does not exist" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.201361 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:40 crc kubenswrapper[5010]: E0203 10:27:40.202233 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-metadata" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.202266 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-metadata" Feb 03 10:27:40 crc kubenswrapper[5010]: E0203 10:27:40.202396 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-log" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.202409 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-log" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.202705 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-log" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.202753 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" containerName="nova-metadata-metadata" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.204441 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.209935 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.210242 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.217982 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.256187 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.256327 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.256364 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bnxb\" (UniqueName: \"kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.256392 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.256536 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.358656 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.358816 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.358872 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " 
pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.358899 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.358922 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bnxb\" (UniqueName: \"kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.360241 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.366010 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.366798 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.367119 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.380296 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bnxb\" (UniqueName: \"kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb\") pod \"nova-metadata-0\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " pod="openstack/nova-metadata-0" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.517791 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e" path="/var/lib/kubelet/pods/9f85f9fc-d39c-48eb-b74c-f62aa2f2d22e/volumes" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.518531 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88c8b02-54df-4761-acc8-c959005f4444" path="/var/lib/kubelet/pods/b88c8b02-54df-4761-acc8-c959005f4444/volumes" Feb 03 10:27:40 crc kubenswrapper[5010]: I0203 10:27:40.552601 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:27:41 crc kubenswrapper[5010]: I0203 10:27:41.116584 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:27:41 crc kubenswrapper[5010]: W0203 10:27:41.141703 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c43ac79_0458_4b95_a9fd_26bc038c195b.slice/crio-d8c29f4fa62c3f6d24562331b8a0ba99f0c35f78468e992ff282bcdb95f55c82 WatchSource:0}: Error finding container d8c29f4fa62c3f6d24562331b8a0ba99f0c35f78468e992ff282bcdb95f55c82: Status 404 returned error can't find the container with id d8c29f4fa62c3f6d24562331b8a0ba99f0c35f78468e992ff282bcdb95f55c82 Feb 03 10:27:41 crc kubenswrapper[5010]: E0203 10:27:41.239450 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d is running failed: container process not found" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:27:41 crc kubenswrapper[5010]: E0203 10:27:41.249415 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d is running failed: container process not found" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:27:41 crc kubenswrapper[5010]: E0203 10:27:41.252378 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d is running failed: container process not found" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:27:41 crc kubenswrapper[5010]: E0203 10:27:41.252503 5010 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerName="nova-scheduler-scheduler" Feb 03 10:27:41 crc kubenswrapper[5010]: I0203 10:27:41.690961 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:41 crc kubenswrapper[5010]: I0203 10:27:41.836271 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srb2s\" (UniqueName: \"kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s\") pod \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " Feb 03 10:27:41 crc kubenswrapper[5010]: I0203 10:27:41.836447 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle\") pod \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " Feb 03 10:27:41 crc kubenswrapper[5010]: I0203 10:27:41.836648 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data\") pod \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\" (UID: \"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e\") " Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.096725 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s" (OuterVolumeSpecName: "kube-api-access-srb2s") pod "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" (UID: "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e"). InnerVolumeSpecName "kube-api-access-srb2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.122934 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srb2s\" (UniqueName: \"kubernetes.io/projected/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-kube-api-access-srb2s\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.147319 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data" (OuterVolumeSpecName: "config-data") pod "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" (UID: "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.217972 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerStarted","Data":"70f58e247699be77808ee32bd051173d13561654851dcea2d20478da52e6150e"} Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.218072 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerStarted","Data":"d8c29f4fa62c3f6d24562331b8a0ba99f0c35f78468e992ff282bcdb95f55c82"} Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.225313 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.239518 5010 generic.go:334] "Generic (PLEG): container finished" podID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" exitCode=0 Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.239596 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e","Type":"ContainerDied","Data":"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d"} Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.239642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d95db89-dc92-4f4e-9371-a9dfcf2eb54e","Type":"ContainerDied","Data":"bf460f6ef526dd4f94d755e6904b0e4b071bb805f8064c527674ef4f7512a907"} Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.239665 5010 scope.go:117] "RemoveContainer" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.239918 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.358901 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" (UID: "3d95db89-dc92-4f4e-9371-a9dfcf2eb54e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.431164 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.454954 5010 scope.go:117] "RemoveContainer" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" Feb 03 10:27:42 crc kubenswrapper[5010]: E0203 10:27:42.455606 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d\": container with ID starting with fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d not found: ID does not exist" containerID="fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.455677 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d"} err="failed to get container status \"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d\": rpc error: code = NotFound desc = could not find container \"fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d\": container with ID starting with fb18e33d07a54ce264f7ae7f504ac6bbe2f7193412593ce651e6c106526cce6d not found: ID does not exist" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.569426 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.587135 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.603691 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:42 crc kubenswrapper[5010]: E0203 10:27:42.604350 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerName="nova-scheduler-scheduler" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.604418 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerName="nova-scheduler-scheduler" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.604679 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" containerName="nova-scheduler-scheduler" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.605372 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.608288 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.614855 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.637087 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6chss\" (UniqueName: \"kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.637463 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.637520 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.739671 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.739732 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.739849 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6chss\" (UniqueName: \"kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.745195 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.745692 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:42 crc kubenswrapper[5010]: I0203 10:27:42.776002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6chss\" (UniqueName: 
\"kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss\") pod \"nova-scheduler-0\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " pod="openstack/nova-scheduler-0" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.256051 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.302320 5010 generic.go:334] "Generic (PLEG): container finished" podID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerID="c99bed3bf87dd9576980ecaf735b0a2713f9773f5d114b1af04d87bd2cd7c5e6" exitCode=0 Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.302460 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerDied","Data":"c99bed3bf87dd9576980ecaf735b0a2713f9773f5d114b1af04d87bd2cd7c5e6"} Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.305904 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerStarted","Data":"a78044c6ee003f2a2c2b9afaa9ab8fb12ae812a98e2ee39a42b2fc304776640e"} Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.894797 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.920827 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.920805864 podStartE2EDuration="3.920805864s" podCreationTimestamp="2026-02-03 10:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:43.393820373 +0000 UTC m=+1533.549796512" watchObservedRunningTime="2026-02-03 10:27:43.920805864 +0000 UTC m=+1534.076781993" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.971746 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle\") pod \"dae76c0d-99bf-42f4-8678-5c1693262ecc\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.972010 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndfcm\" (UniqueName: \"kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm\") pod \"dae76c0d-99bf-42f4-8678-5c1693262ecc\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.972069 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs\") pod \"dae76c0d-99bf-42f4-8678-5c1693262ecc\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.972156 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data\") pod \"dae76c0d-99bf-42f4-8678-5c1693262ecc\" (UID: \"dae76c0d-99bf-42f4-8678-5c1693262ecc\") " Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.976703 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs" (OuterVolumeSpecName: "logs") pod "dae76c0d-99bf-42f4-8678-5c1693262ecc" (UID: "dae76c0d-99bf-42f4-8678-5c1693262ecc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.985443 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm" (OuterVolumeSpecName: "kube-api-access-ndfcm") pod "dae76c0d-99bf-42f4-8678-5c1693262ecc" (UID: "dae76c0d-99bf-42f4-8678-5c1693262ecc"). InnerVolumeSpecName "kube-api-access-ndfcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:43 crc kubenswrapper[5010]: I0203 10:27:43.995357 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.038816 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data" (OuterVolumeSpecName: "config-data") pod "dae76c0d-99bf-42f4-8678-5c1693262ecc" (UID: "dae76c0d-99bf-42f4-8678-5c1693262ecc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.074603 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.074650 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndfcm\" (UniqueName: \"kubernetes.io/projected/dae76c0d-99bf-42f4-8678-5c1693262ecc-kube-api-access-ndfcm\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.074667 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae76c0d-99bf-42f4-8678-5c1693262ecc-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.085791 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dae76c0d-99bf-42f4-8678-5c1693262ecc" (UID: "dae76c0d-99bf-42f4-8678-5c1693262ecc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.177681 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae76c0d-99bf-42f4-8678-5c1693262ecc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.337534 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2d836d0-d303-41ca-9c8b-f714d6a4e76c","Type":"ContainerStarted","Data":"58f162aa3d6e537665ac2963288a9914168137aa741e22132f9fea00cc29574c"} Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.346418 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.349452 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dae76c0d-99bf-42f4-8678-5c1693262ecc","Type":"ContainerDied","Data":"6078c7a1e48bd775bca8b987098ebda1a5e82da5d6e8ba44c4019d49bd1f8dd5"} Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.349532 5010 scope.go:117] "RemoveContainer" containerID="c99bed3bf87dd9576980ecaf735b0a2713f9773f5d114b1af04d87bd2cd7c5e6" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.408366 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.420713 5010 scope.go:117] "RemoveContainer" containerID="241c9e9f88442e26f4c60b5bf7f593615d35fb056df34c097b437a3289e1ed1e" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.428044 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.470939 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:44 crc kubenswrapper[5010]: E0203 10:27:44.471530 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-api" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.471550 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-api" Feb 03 10:27:44 crc kubenswrapper[5010]: E0203 10:27:44.471567 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-log" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.471573 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-log" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.471777 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-api" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.471805 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" containerName="nova-api-log" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.540298 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.546849 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.584933 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d95db89-dc92-4f4e-9371-a9dfcf2eb54e" path="/var/lib/kubelet/pods/3d95db89-dc92-4f4e-9371-a9dfcf2eb54e/volumes" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.585831 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae76c0d-99bf-42f4-8678-5c1693262ecc" path="/var/lib/kubelet/pods/dae76c0d-99bf-42f4-8678-5c1693262ecc/volumes" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.587365 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.700874 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.700981 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.701492 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.701875 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfbmc\" (UniqueName: \"kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.806551 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfbmc\" (UniqueName: \"kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.806786 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.806843 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.806884 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.808654 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.819488 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.835604 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.848778 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfbmc\" (UniqueName: \"kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc\") pod \"nova-api-0\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " pod="openstack/nova-api-0" Feb 03 10:27:44 crc kubenswrapper[5010]: I0203 10:27:44.891739 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:27:45 crc kubenswrapper[5010]: I0203 10:27:45.553349 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:27:45 crc kubenswrapper[5010]: I0203 10:27:45.555185 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:27:45 crc kubenswrapper[5010]: I0203 10:27:45.579168 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2d836d0-d303-41ca-9c8b-f714d6a4e76c","Type":"ContainerStarted","Data":"3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3"} Feb 03 10:27:45 crc kubenswrapper[5010]: I0203 10:27:45.627142 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:27:45 crc kubenswrapper[5010]: W0203 10:27:45.634543 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod341c8347_e47b_42c7_ace7_acb55f2b8c0f.slice/crio-c47f6676aaf9cff804c2a71888dc81341a699bfd049b92c645db6bd9367bad06 WatchSource:0}: Error finding container c47f6676aaf9cff804c2a71888dc81341a699bfd049b92c645db6bd9367bad06: Status 404 returned error can't find the container with id c47f6676aaf9cff804c2a71888dc81341a699bfd049b92c645db6bd9367bad06 Feb 03 10:27:45 crc kubenswrapper[5010]: I0203 10:27:45.640017 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.6399853269999998 podStartE2EDuration="3.639985327s" podCreationTimestamp="2026-02-03 10:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:45.604172928 +0000 UTC m=+1535.760149057" 
watchObservedRunningTime="2026-02-03 10:27:45.639985327 +0000 UTC m=+1535.795961446" Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.390132 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.390609 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.596807 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerStarted","Data":"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb"} Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.597156 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerStarted","Data":"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6"} Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.597193 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerStarted","Data":"c47f6676aaf9cff804c2a71888dc81341a699bfd049b92c645db6bd9367bad06"} Feb 03 10:27:46 crc kubenswrapper[5010]: I0203 10:27:46.641101 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6410710330000002 podStartE2EDuration="2.641071033s" podCreationTimestamp="2026-02-03 10:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:27:46.626047097 +0000 UTC m=+1536.782023216" watchObservedRunningTime="2026-02-03 10:27:46.641071033 +0000 UTC m=+1536.797047162" Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.123936 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.125123 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" containerName="kube-state-metrics" containerID="cri-o://8566fd9acbf9b37a7c0e5b8b574fab43fa6c097fb1878bb86a8c41a2e79e2d53" gracePeriod=30 Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.634612 5010 generic.go:334] "Generic (PLEG): container finished" podID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" containerID="8566fd9acbf9b37a7c0e5b8b574fab43fa6c097fb1878bb86a8c41a2e79e2d53" exitCode=2 Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.636419 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7b0ebfb6-7019-4de6-88df-b2161da95e9b","Type":"ContainerDied","Data":"8566fd9acbf9b37a7c0e5b8b574fab43fa6c097fb1878bb86a8c41a2e79e2d53"} Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.777758 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.876235 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxkf4\" (UniqueName: \"kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4\") pod \"7b0ebfb6-7019-4de6-88df-b2161da95e9b\" (UID: \"7b0ebfb6-7019-4de6-88df-b2161da95e9b\") " Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.899571 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4" (OuterVolumeSpecName: "kube-api-access-lxkf4") pod "7b0ebfb6-7019-4de6-88df-b2161da95e9b" (UID: "7b0ebfb6-7019-4de6-88df-b2161da95e9b"). InnerVolumeSpecName "kube-api-access-lxkf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:47 crc kubenswrapper[5010]: I0203 10:27:47.979956 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxkf4\" (UniqueName: \"kubernetes.io/projected/7b0ebfb6-7019-4de6-88df-b2161da95e9b-kube-api-access-lxkf4\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.262391 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.648127 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7b0ebfb6-7019-4de6-88df-b2161da95e9b","Type":"ContainerDied","Data":"99eae2ce273fff1db7b69f1325ef839ad84ecc780d3634ec59776f868fb7d556"} Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.648176 5010 scope.go:117] "RemoveContainer" containerID="8566fd9acbf9b37a7c0e5b8b574fab43fa6c097fb1878bb86a8c41a2e79e2d53" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.648328 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.696937 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.713685 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.723397 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.732775 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:48 crc kubenswrapper[5010]: E0203 10:27:48.733467 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" containerName="kube-state-metrics" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.733489 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" containerName="kube-state-metrics" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.733732 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" containerName="kube-state-metrics" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.734635 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.738630 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.738642 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.778248 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.901621 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.901979 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.902200 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6nx5\" (UniqueName: \"kubernetes.io/projected/de374df0-0b73-4be2-9719-d4b471782ed4-kube-api-access-h6nx5\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:48 crc kubenswrapper[5010]: I0203 10:27:48.902257 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.165232 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6nx5\" (UniqueName: \"kubernetes.io/projected/de374df0-0b73-4be2-9719-d4b471782ed4-kube-api-access-h6nx5\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.165307 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.165414 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.165672 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.174300 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.175843 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.178466 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de374df0-0b73-4be2-9719-d4b471782ed4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.192470 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6nx5\" (UniqueName: \"kubernetes.io/projected/de374df0-0b73-4be2-9719-d4b471782ed4-kube-api-access-h6nx5\") pod \"kube-state-metrics-0\" (UID: \"de374df0-0b73-4be2-9719-d4b471782ed4\") " pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.365053 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 10:27:49 crc kubenswrapper[5010]: W0203 10:27:49.731984 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde374df0_0b73_4be2_9719_d4b471782ed4.slice/crio-aee893da80b4786c451fe90946be81becfbec886f6a9282b8ea893166a62a105 WatchSource:0}: Error finding container aee893da80b4786c451fe90946be81becfbec886f6a9282b8ea893166a62a105: Status 404 returned error can't find the container with id aee893da80b4786c451fe90946be81becfbec886f6a9282b8ea893166a62a105 Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.749533 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.863584 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.863963 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-central-agent" containerID="cri-o://bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee" gracePeriod=30 Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.864051 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-notification-agent" containerID="cri-o://9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd" gracePeriod=30 Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.864051 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="proxy-httpd" containerID="cri-o://7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e" gracePeriod=30 Feb 03 10:27:49 crc kubenswrapper[5010]: I0203 10:27:49.864054 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="sg-core" containerID="cri-o://f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28" gracePeriod=30 Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.518591 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0ebfb6-7019-4de6-88df-b2161da95e9b" path="/var/lib/kubelet/pods/7b0ebfb6-7019-4de6-88df-b2161da95e9b/volumes" Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.554020 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.557260 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.729704 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de374df0-0b73-4be2-9719-d4b471782ed4","Type":"ContainerStarted","Data":"09638d8f14a0e6990096afb1a2128a2b41505deb68f4c5a411beb7b5380a0fba"} Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.729799 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"de374df0-0b73-4be2-9719-d4b471782ed4","Type":"ContainerStarted","Data":"aee893da80b4786c451fe90946be81becfbec886f6a9282b8ea893166a62a105"} Feb 03 10:27:50 crc kubenswrapper[5010]: 
I0203 10:27:50.729870 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.762024 5010 generic.go:334] "Generic (PLEG): container finished" podID="07964b2d-a893-46b5-a01d-c479361c0d37" containerID="7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e" exitCode=0 Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.763284 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerDied","Data":"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e"} Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.763405 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerDied","Data":"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28"} Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.763336 5010 generic.go:334] "Generic (PLEG): container finished" podID="07964b2d-a893-46b5-a01d-c479361c0d37" containerID="f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28" exitCode=2 Feb 03 10:27:50 crc kubenswrapper[5010]: I0203 10:27:50.791880 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.237819507 podStartE2EDuration="2.791852813s" podCreationTimestamp="2026-02-03 10:27:48 +0000 UTC" firstStartedPulling="2026-02-03 10:27:49.736842463 +0000 UTC m=+1539.892818582" lastFinishedPulling="2026-02-03 10:27:50.290875759 +0000 UTC m=+1540.446851888" observedRunningTime="2026-02-03 10:27:50.761685328 +0000 UTC m=+1540.917661457" watchObservedRunningTime="2026-02-03 10:27:50.791852813 +0000 UTC m=+1540.947828942" Feb 03 10:27:51 crc kubenswrapper[5010]: I0203 10:27:51.833054 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 10:27:51 crc kubenswrapper[5010]: I0203 10:27:51.840663 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 10:27:51 crc kubenswrapper[5010]: I0203 10:27:51.879936 5010 generic.go:334] "Generic (PLEG): container finished" podID="07964b2d-a893-46b5-a01d-c479361c0d37" containerID="bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee" exitCode=0 Feb 03 10:27:51 crc kubenswrapper[5010]: I0203 10:27:51.880921 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerDied","Data":"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee"} Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.608134 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671326 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671395 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671495 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671562 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671587 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671762 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.671874 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mzfj\" (UniqueName: \"kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj\") pod \"07964b2d-a893-46b5-a01d-c479361c0d37\" (UID: \"07964b2d-a893-46b5-a01d-c479361c0d37\") " Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.672963 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.673447 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.681391 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj" (OuterVolumeSpecName: "kube-api-access-2mzfj") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "kube-api-access-2mzfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.687815 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts" (OuterVolumeSpecName: "scripts") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.762371 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.804513 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.804701 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07964b2d-a893-46b5-a01d-c479361c0d37-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.804802 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.804884 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.804953 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mzfj\" (UniqueName: \"kubernetes.io/projected/07964b2d-a893-46b5-a01d-c479361c0d37-kube-api-access-2mzfj\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.814689 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.840312 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data" (OuterVolumeSpecName: "config-data") pod "07964b2d-a893-46b5-a01d-c479361c0d37" (UID: "07964b2d-a893-46b5-a01d-c479361c0d37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.900960 5010 generic.go:334] "Generic (PLEG): container finished" podID="07964b2d-a893-46b5-a01d-c479361c0d37" containerID="9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd" exitCode=0 Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.901376 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerDied","Data":"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd"} Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.901498 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07964b2d-a893-46b5-a01d-c479361c0d37","Type":"ContainerDied","Data":"cd6841d336caf71fc510297facb1277599cbdeca80d5b944442ca08505d329ae"} Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.901617 5010 scope.go:117] "RemoveContainer" containerID="7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.901822 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.913818 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.913867 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07964b2d-a893-46b5-a01d-c479361c0d37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.955143 5010 scope.go:117] "RemoveContainer" containerID="f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28" Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.969993 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:52 crc kubenswrapper[5010]: I0203 10:27:52.981438 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.008973 5010 scope.go:117] "RemoveContainer" containerID="9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.013043 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.013816 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="proxy-httpd" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.013979 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="proxy-httpd" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.014091 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="sg-core" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.017575 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="sg-core" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.017826 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" 
containerName="ceilometer-notification-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.017916 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-notification-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.018029 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-central-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.018143 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-central-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.018637 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="proxy-httpd" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.018751 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-notification-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.018849 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="sg-core" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.018922 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" containerName="ceilometer-central-agent" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.023565 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.028349 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.040395 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.045948 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.052270 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.066804 5010 scope.go:117] "RemoveContainer" containerID="bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.163641 5010 scope.go:117] "RemoveContainer" containerID="7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.165100 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e\": container with ID starting with 7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e not found: ID does not exist" containerID="7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.165151 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e"} err="failed to get container status \"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e\": rpc error: code = NotFound desc = could not find container 
\"7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e\": container with ID starting with 7eb86e626fc6425e81cd2f25c795ec2334ea6f49b2d765a5709be8db1c93bd3e not found: ID does not exist" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.165189 5010 scope.go:117] "RemoveContainer" containerID="f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.165978 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28\": container with ID starting with f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28 not found: ID does not exist" containerID="f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.166002 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28"} err="failed to get container status \"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28\": rpc error: code = NotFound desc = could not find container \"f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28\": container with ID starting with f302c14d86d357f9abadc99fa70153233ab75f37a32c385188137eb1a887ef28 not found: ID does not exist" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.166040 5010 scope.go:117] "RemoveContainer" containerID="9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.166549 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd\": container with ID starting with 9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd not found: ID does not exist" containerID="9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.166576 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd"} err="failed to get container status \"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd\": rpc error: code = NotFound desc = could not find container \"9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd\": container with ID starting with 9436c7380821578e2f7d1ea7890a0bc427d5821136dd8d51794315dacd0732dd not found: ID does not exist" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.166612 5010 scope.go:117] "RemoveContainer" containerID="bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee" Feb 03 10:27:53 crc kubenswrapper[5010]: E0203 10:27:53.167013 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee\": container with ID starting with bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee not found: ID does not exist" containerID="bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.167031 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee"} 
err="failed to get container status \"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee\": rpc error: code = NotFound desc = could not find container \"bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee\": container with ID starting with bbaa765d6d6c8ed69b47dfe8f9bde9c41c7176bba9a104b4afd63cd47742e4ee not found: ID does not exist" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.225417 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.225502 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.225544 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.225969 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77mvm\" (UniqueName: \"kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.226045 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.226588 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.226725 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.226757 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.258807 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 10:27:53 crc 
kubenswrapper[5010]: I0203 10:27:53.294000 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.329809 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.329868 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.329894 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.329921 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.329952 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.330012 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77mvm\" (UniqueName: \"kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.330081 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.330233 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.332898 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.333012 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.337175 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.337964 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.341963 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.342908 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.347253 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.351326 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77mvm\" (UniqueName: \"kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm\") pod \"ceilometer-0\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.359648 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:27:53 crc kubenswrapper[5010]: W0203 10:27:53.723958 5010 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e6ce46b_7ed7_48c5_a09c_cb39ec7bf34b.slice/crio-df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede": error while statting cgroup v2: [unable to parse /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e6ce46b_7ed7_48c5_a09c_cb39ec7bf34b.slice/crio-df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede/memory.stat: read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e6ce46b_7ed7_48c5_a09c_cb39ec7bf34b.slice/crio-df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede/memory.stat: no such device], continuing to push stats Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.947664 5010 generic.go:334] "Generic (PLEG): container finished" podID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerID="ccb768185c1be80c1cf2232c6f15632edb6af133c55f2bd369d8a13606beb3d6" exitCode=137 Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.947712 5010 generic.go:334] "Generic (PLEG): container finished" podID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerID="d39b7b37971eb5d63b6cabefb740041e4cc9cc6265fc84bc4b6ff52605291d6a" exitCode=137 Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.947808 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerDied","Data":"ccb768185c1be80c1cf2232c6f15632edb6af133c55f2bd369d8a13606beb3d6"} Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.947847 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerDied","Data":"d39b7b37971eb5d63b6cabefb740041e4cc9cc6265fc84bc4b6ff52605291d6a"} Feb 03 10:27:53 crc kubenswrapper[5010]: I0203 10:27:53.947881 5010 scope.go:117] "RemoveContainer" containerID="4e9bc8f0d6381cd12e012dcf3fe06eb0672b376af0b818c286309997a48dc607" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.011519 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.062491 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.367546 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.415977 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.416182 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnlnb\" (UniqueName: \"kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.416318 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.417256 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs" (OuterVolumeSpecName: "logs") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.417378 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.417797 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.417921 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.418006 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key\") pod \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\" (UID: \"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b\") " Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.418558 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.426360 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.426472 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb" (OuterVolumeSpecName: "kube-api-access-mnlnb") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "kube-api-access-mnlnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.454689 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data" (OuterVolumeSpecName: "config-data") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.465566 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts" (OuterVolumeSpecName: "scripts") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.483415 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.516037 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" (UID: "3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.517702 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07964b2d-a893-46b5-a01d-c479361c0d37" path="/var/lib/kubelet/pods/07964b2d-a893-46b5-a01d-c479361c0d37/volumes" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523117 5010 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523159 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523172 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnlnb\" (UniqueName: \"kubernetes.io/projected/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-kube-api-access-mnlnb\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523189 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523204 5010 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:54 crc kubenswrapper[5010]: I0203 10:27:54.523231 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:54.988378 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.018425 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.017450 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cdcd56868-k9h7g" Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.018473 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdcd56868-k9h7g" event={"ID":"3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b","Type":"ContainerDied","Data":"df9fac7aaf04d2b9be17b46f0957ab58bf3f75ddd22ffd12e196051104d34ede"} Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.018552 5010 scope.go:117] "RemoveContainer" containerID="ccb768185c1be80c1cf2232c6f15632edb6af133c55f2bd369d8a13606beb3d6" Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.031573 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerStarted","Data":"8b65fa50da6f4624928ff97940b1b888dbd6125f5954bb57d55b8b921aea3ffc"} Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.059140 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"] Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.073707 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7cdcd56868-k9h7g"] Feb 03 10:27:55 crc kubenswrapper[5010]: I0203 10:27:55.266993 5010 scope.go:117] "RemoveContainer" containerID="d39b7b37971eb5d63b6cabefb740041e4cc9cc6265fc84bc4b6ff52605291d6a" Feb 03 10:27:56 crc kubenswrapper[5010]: I0203 10:27:56.028534 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 10:27:56 crc kubenswrapper[5010]: I0203 10:27:56.049854 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerStarted","Data":"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05"} Feb 03 10:27:56 crc kubenswrapper[5010]: I0203 10:27:56.069607 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 10:27:56 crc kubenswrapper[5010]: I0203 10:27:56.512821 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" path="/var/lib/kubelet/pods/3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b/volumes" Feb 03 10:27:57 crc kubenswrapper[5010]: I0203 10:27:57.064163 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerStarted","Data":"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec"} Feb 03 10:27:57 crc kubenswrapper[5010]: I0203 10:27:57.064670 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerStarted","Data":"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81"} Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.080998 5010 generic.go:334] "Generic (PLEG): container finished" podID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" containerID="ae9cd98547d8fff1706d863c1e8f43d79f4ce19a78307424e4a816129ff20e12" exitCode=137 Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.081049 5010 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4df0ad18-8721-40ef-91bc-c609d61f1c1b","Type":"ContainerDied","Data":"ae9cd98547d8fff1706d863c1e8f43d79f4ce19a78307424e4a816129ff20e12"} Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.239318 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.401480 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx5mj\" (UniqueName: \"kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj\") pod \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.401664 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle\") pod \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.402111 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data\") pod \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\" (UID: \"4df0ad18-8721-40ef-91bc-c609d61f1c1b\") " Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.409049 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj" (OuterVolumeSpecName: "kube-api-access-wx5mj") pod "4df0ad18-8721-40ef-91bc-c609d61f1c1b" (UID: "4df0ad18-8721-40ef-91bc-c609d61f1c1b"). InnerVolumeSpecName "kube-api-access-wx5mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.437517 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4df0ad18-8721-40ef-91bc-c609d61f1c1b" (UID: "4df0ad18-8721-40ef-91bc-c609d61f1c1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.450766 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data" (OuterVolumeSpecName: "config-data") pod "4df0ad18-8721-40ef-91bc-c609d61f1c1b" (UID: "4df0ad18-8721-40ef-91bc-c609d61f1c1b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.505687 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.505808 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4df0ad18-8721-40ef-91bc-c609d61f1c1b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:58 crc kubenswrapper[5010]: I0203 10:27:58.505821 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx5mj\" (UniqueName: \"kubernetes.io/projected/4df0ad18-8721-40ef-91bc-c609d61f1c1b-kube-api-access-wx5mj\") on node \"crc\" DevicePath \"\"" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.098161 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4df0ad18-8721-40ef-91bc-c609d61f1c1b","Type":"ContainerDied","Data":"53f9f5ad7c65c9cd148ac8aad3fd34e98580d6dfe75ba51eece28e29be12ce47"} Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.098291 5010 scope.go:117] "RemoveContainer" containerID="ae9cd98547d8fff1706d863c1e8f43d79f4ce19a78307424e4a816129ff20e12" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.098442 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.142933 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.158399 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.179867 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:59 crc kubenswrapper[5010]: E0203 10:27:59.180354 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon-log" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180367 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon-log" Feb 03 10:27:59 crc kubenswrapper[5010]: E0203 10:27:59.180387 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180395 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 10:27:59 crc kubenswrapper[5010]: E0203 10:27:59.180419 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180426 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: E0203 10:27:59.180448 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180454 5010 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: E0203 10:27:59.180466 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180472 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180638 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180652 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180671 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon-log" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.180681 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.181361 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.185183 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.186415 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.192007 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.192696 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.232083 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.232148 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.232358 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.232429 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdzxx\" 
(UniqueName: \"kubernetes.io/projected/c9bd4788-ae5f-49c4-8116-04076a16f4f1-kube-api-access-rdzxx\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.232498 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.387979 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.388041 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.388112 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.388131 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdzxx\" (UniqueName: \"kubernetes.io/projected/c9bd4788-ae5f-49c4-8116-04076a16f4f1-kube-api-access-rdzxx\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.388161 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.392935 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.395106 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.395606 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.404435 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9bd4788-ae5f-49c4-8116-04076a16f4f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.411126 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.416703 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdzxx\" (UniqueName: \"kubernetes.io/projected/c9bd4788-ae5f-49c4-8116-04076a16f4f1-kube-api-access-rdzxx\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9bd4788-ae5f-49c4-8116-04076a16f4f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:27:59 crc kubenswrapper[5010]: I0203 10:27:59.527959 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.059118 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.110271 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerStarted","Data":"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398"} Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.110931 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.112993 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c9bd4788-ae5f-49c4-8116-04076a16f4f1","Type":"ContainerStarted","Data":"e7cd8fc8c77f5abe94ae0b642f56d423fa0b49fe1c31e908ec0f6a21151fee4a"} Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.195307 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.315045944 podStartE2EDuration="8.195280036s" podCreationTimestamp="2026-02-03 10:27:52 +0000 UTC" firstStartedPulling="2026-02-03 10:27:54.219510485 +0000 UTC m=+1544.375486614" lastFinishedPulling="2026-02-03 10:27:59.099744577 +0000 UTC m=+1549.255720706" observedRunningTime="2026-02-03 10:28:00.154771736 +0000 UTC m=+1550.310747875" watchObservedRunningTime="2026-02-03 10:28:00.195280036 +0000 UTC m=+1550.351256175" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.519388 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4df0ad18-8721-40ef-91bc-c609d61f1c1b" path="/var/lib/kubelet/pods/4df0ad18-8721-40ef-91bc-c609d61f1c1b/volumes" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.559526 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.563586 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 10:28:00 crc kubenswrapper[5010]: I0203 10:28:00.571592 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 
10:28:01 crc kubenswrapper[5010]: I0203 10:28:01.129986 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c9bd4788-ae5f-49c4-8116-04076a16f4f1","Type":"ContainerStarted","Data":"6a88f5fdc033f5a697a9e171054489437d18d090c69fe63c010ae837224670c9"} Feb 03 10:28:01 crc kubenswrapper[5010]: I0203 10:28:01.140153 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 10:28:01 crc kubenswrapper[5010]: I0203 10:28:01.154869 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.154838355 podStartE2EDuration="2.154838355s" podCreationTimestamp="2026-02-03 10:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:01.148661057 +0000 UTC m=+1551.304637186" watchObservedRunningTime="2026-02-03 10:28:01.154838355 +0000 UTC m=+1551.310814484" Feb 03 10:28:04 crc kubenswrapper[5010]: I0203 10:28:04.529141 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:28:04 crc kubenswrapper[5010]: I0203 10:28:04.897183 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 10:28:04 crc kubenswrapper[5010]: I0203 10:28:04.897906 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 10:28:04 crc kubenswrapper[5010]: I0203 10:28:04.898657 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 10:28:04 crc kubenswrapper[5010]: I0203 10:28:04.904039 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.197899 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.206042 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.460871 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"] Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.479503 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6ce46b-7ed7-48c5-a09c-cb39ec7bf34b" containerName="horizon" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.480661 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"] Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.480757 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.642425 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm9pt\" (UniqueName: \"kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.643150 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.643261 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.643401 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.643550 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.643679 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746142 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746258 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746325 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0\") pod 
\"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746430 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm9pt\" (UniqueName: \"kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.746528 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.747430 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.747629 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.747687 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.748258 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.748324 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.770003 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm9pt\" (UniqueName: \"kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt\") pod \"dnsmasq-dns-89c5cd4d5-5t6hf\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" 
Feb 03 10:28:05 crc kubenswrapper[5010]: I0203 10:28:05.810843 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:06 crc kubenswrapper[5010]: W0203 10:28:06.713360 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod112eb3e9_cf11_4513_be2d_53a42670413e.slice/crio-9696bbc5c05e1ee911f02b7758d1162dc7d17512676a3ce246b9266d4a35accd WatchSource:0}: Error finding container 9696bbc5c05e1ee911f02b7758d1162dc7d17512676a3ce246b9266d4a35accd: Status 404 returned error can't find the container with id 9696bbc5c05e1ee911f02b7758d1162dc7d17512676a3ce246b9266d4a35accd Feb 03 10:28:06 crc kubenswrapper[5010]: I0203 10:28:06.718420 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"] Feb 03 10:28:07 crc kubenswrapper[5010]: I0203 10:28:07.342151 5010 generic.go:334] "Generic (PLEG): container finished" podID="112eb3e9-cf11-4513-be2d-53a42670413e" containerID="84b72c9b54d05dcdbccb71e2a8f9d59046f32de5c34fe094370a4de1492b0639" exitCode=0 Feb 03 10:28:07 crc kubenswrapper[5010]: I0203 10:28:07.342273 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" event={"ID":"112eb3e9-cf11-4513-be2d-53a42670413e","Type":"ContainerDied","Data":"84b72c9b54d05dcdbccb71e2a8f9d59046f32de5c34fe094370a4de1492b0639"} Feb 03 10:28:07 crc kubenswrapper[5010]: I0203 10:28:07.342942 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" event={"ID":"112eb3e9-cf11-4513-be2d-53a42670413e","Type":"ContainerStarted","Data":"9696bbc5c05e1ee911f02b7758d1162dc7d17512676a3ce246b9266d4a35accd"} Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.358503 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" event={"ID":"112eb3e9-cf11-4513-be2d-53a42670413e","Type":"ContainerStarted","Data":"e50968d30732ac2c762348838c8f14a711f5720b5d244d0a09fd6ce7ae975514"} Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.359361 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.401799 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" podStartSLOduration=3.401764178 podStartE2EDuration="3.401764178s" podCreationTimestamp="2026-02-03 10:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:08.382042404 +0000 UTC m=+1558.538018553" watchObservedRunningTime="2026-02-03 10:28:08.401764178 +0000 UTC m=+1558.557740307" Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.638283 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.638694 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-log" containerID="cri-o://28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6" gracePeriod=30 Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.638788 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-api" 
containerID="cri-o://af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb" gracePeriod=30 Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.824561 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.824905 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-central-agent" containerID="cri-o://e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05" gracePeriod=30 Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.824993 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="proxy-httpd" containerID="cri-o://f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398" gracePeriod=30 Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.825086 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="sg-core" containerID="cri-o://640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec" gracePeriod=30 Feb 03 10:28:08 crc kubenswrapper[5010]: I0203 10:28:08.825131 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-notification-agent" containerID="cri-o://cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81" gracePeriod=30 Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.508572 5010 generic.go:334] "Generic (PLEG): container finished" podID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerID="f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398" exitCode=0 Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.508897 5010 generic.go:334] "Generic (PLEG): container finished" podID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerID="640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec" exitCode=2 Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.508956 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerDied","Data":"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398"} Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.508988 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerDied","Data":"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec"} Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.530563 5010 generic.go:334] "Generic (PLEG): container finished" podID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerID="28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6" exitCode=143 Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.531585 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerDied","Data":"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6"} Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.532140 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:28:09 crc kubenswrapper[5010]: I0203 10:28:09.605540 5010 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.533610 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542668 5010 generic.go:334] "Generic (PLEG): container finished" podID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerID="cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81" exitCode=0 Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542701 5010 generic.go:334] "Generic (PLEG): container finished" podID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerID="e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05" exitCode=0 Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542761 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542734 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerDied","Data":"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81"} Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542852 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerDied","Data":"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05"} Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542864 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"124e7652-b5a0-4a37-af4e-03b4585b6d71","Type":"ContainerDied","Data":"8b65fa50da6f4624928ff97940b1b888dbd6125f5954bb57d55b8b921aea3ffc"} Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.542881 5010 scope.go:117] "RemoveContainer" containerID="f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584330 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584357 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584605 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584674 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584783 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: 
\"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584889 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.584933 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77mvm\" (UniqueName: \"kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.585064 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.585155 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle\") pod \"124e7652-b5a0-4a37-af4e-03b4585b6d71\" (UID: \"124e7652-b5a0-4a37-af4e-03b4585b6d71\") " Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.592911 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.603423 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.610498 5010 scope.go:117] "RemoveContainer" containerID="640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.616459 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm" (OuterVolumeSpecName: "kube-api-access-77mvm") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "kube-api-access-77mvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.620245 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts" (OuterVolumeSpecName: "scripts") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.694599 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.702536 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.702661 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.702682 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/124e7652-b5a0-4a37-af4e-03b4585b6d71-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.702699 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.702723 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77mvm\" (UniqueName: \"kubernetes.io/projected/124e7652-b5a0-4a37-af4e-03b4585b6d71-kube-api-access-77mvm\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.718443 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.725559 5010 scope.go:117] "RemoveContainer" containerID="cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.751841 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.779184 5010 scope.go:117] "RemoveContainer" containerID="e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.793790 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-fmn8g"] Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.794838 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-central-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.794868 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-central-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.794897 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="proxy-httpd" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.794908 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="proxy-httpd" Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.794953 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="sg-core" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.794961 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="sg-core" Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.794981 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-notification-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.794990 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-notification-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.795305 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-notification-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.795340 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="sg-core" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.795368 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="ceilometer-central-agent" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.795378 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" containerName="proxy-httpd" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.800001 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.804377 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fmn8g"] Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.805487 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.805517 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.806580 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data" (OuterVolumeSpecName: "config-data") pod "124e7652-b5a0-4a37-af4e-03b4585b6d71" (UID: "124e7652-b5a0-4a37-af4e-03b4585b6d71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.808430 5010 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.808463 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.808480 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/124e7652-b5a0-4a37-af4e-03b4585b6d71-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.812381 5010 scope.go:117] "RemoveContainer" containerID="f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398" Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.813024 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398\": container with ID starting with f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398 not found: ID does not exist" containerID="f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.813079 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398"} err="failed to get container status \"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398\": rpc error: code = NotFound desc = could not find container \"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398\": container with ID starting with f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398 not found: ID does not exist" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.813109 5010 scope.go:117] "RemoveContainer" containerID="640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec" Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.813642 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec\": container with ID starting with 
640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec not found: ID does not exist" containerID="640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.813684 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec"} err="failed to get container status \"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec\": rpc error: code = NotFound desc = could not find container \"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec\": container with ID starting with 640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec not found: ID does not exist" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.813727 5010 scope.go:117] "RemoveContainer" containerID="cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81" Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.814085 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81\": container with ID starting with cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81 not found: ID does not exist" containerID="cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.814110 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81"} err="failed to get container status \"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81\": rpc error: code = NotFound desc = could not find container \"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81\": container with ID starting with cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81 not found: ID does not exist" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.814127 5010 scope.go:117] "RemoveContainer" containerID="e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05" Feb 03 10:28:10 crc kubenswrapper[5010]: E0203 10:28:10.814501 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05\": container with ID starting with e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05 not found: ID does not exist" containerID="e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.814521 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05"} err="failed to get container status \"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05\": rpc error: code = NotFound desc = could not find container \"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05\": container with ID starting with e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05 not found: ID does not exist" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.814537 5010 scope.go:117] "RemoveContainer" containerID="f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.815133 5010 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398"} err="failed to get container status \"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398\": rpc error: code = NotFound desc = could not find container \"f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398\": container with ID starting with f6d9cfe07bd3ff7c43cd18e67aea2f125125da071e029242160880530acfe398 not found: ID does not exist" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.815152 5010 scope.go:117] "RemoveContainer" containerID="640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.815679 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec"} err="failed to get container status \"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec\": rpc error: code = NotFound desc = could not find container \"640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec\": container with ID starting with 640c72c508bfbc05c6361dba6a2ae9df9990444a75b1a6429705c0602819c0ec not found: ID does not exist" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.815705 5010 scope.go:117] "RemoveContainer" containerID="cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.816105 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81"} err="failed to get container status \"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81\": rpc error: code = NotFound desc = could not find container \"cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81\": container with ID starting with cc80821dc2ec592df4774a1730f0a7ea7f7fda4a71441ea727bc7a0187ab3d81 not found: ID does not exist" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.816145 5010 scope.go:117] "RemoveContainer" containerID="e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.817011 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05"} err="failed to get container status \"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05\": rpc error: code = NotFound desc = could not find container \"e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05\": container with ID starting with e33e65b72bb4264ffd955a8476f29bee0a28afc0a791bc776525354f23dd9d05 not found: ID does not exist" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.911233 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.911826 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhn74\" (UniqueName: \"kubernetes.io/projected/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-kube-api-access-hhn74\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: 
\"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.911865 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.911901 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.930151 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.949965 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.966439 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.969583 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.972172 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.973250 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.974304 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 10:28:10 crc kubenswrapper[5010]: I0203 10:28:10.979396 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018322 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhn74\" (UniqueName: \"kubernetes.io/projected/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-kube-api-access-hhn74\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018392 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018439 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018478 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018635 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.018702 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019061 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019167 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019283 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5zz8\" (UniqueName: \"kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019324 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019384 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.019443 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.024425 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " 
pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.024503 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.027991 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.035447 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhn74\" (UniqueName: \"kubernetes.io/projected/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-kube-api-access-hhn74\") pod \"nova-cell1-cell-mapping-fmn8g\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.123812 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.123895 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.123927 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.124047 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.124110 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.124166 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5zz8\" (UniqueName: \"kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.124198 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.124332 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.126040 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.128534 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.131134 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.131321 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.131573 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.131981 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.132817 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.133692 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.146503 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5zz8\" (UniqueName: \"kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8\") pod \"ceilometer-0\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.309075 5010 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.637789 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fmn8g"] Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.853748 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:11 crc kubenswrapper[5010]: I0203 10:28:11.971277 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.363075 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.473960 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data\") pod \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.474016 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs\") pod \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.474135 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle\") pod \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.474227 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfbmc\" (UniqueName: \"kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc\") pod \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\" (UID: \"341c8347-e47b-42c7-ace7-acb55f2b8c0f\") " Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.474906 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs" (OuterVolumeSpecName: "logs") pod "341c8347-e47b-42c7-ace7-acb55f2b8c0f" (UID: "341c8347-e47b-42c7-ace7-acb55f2b8c0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.485236 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc" (OuterVolumeSpecName: "kube-api-access-lfbmc") pod "341c8347-e47b-42c7-ace7-acb55f2b8c0f" (UID: "341c8347-e47b-42c7-ace7-acb55f2b8c0f"). InnerVolumeSpecName "kube-api-access-lfbmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.528853 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "341c8347-e47b-42c7-ace7-acb55f2b8c0f" (UID: "341c8347-e47b-42c7-ace7-acb55f2b8c0f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.537964 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124e7652-b5a0-4a37-af4e-03b4585b6d71" path="/var/lib/kubelet/pods/124e7652-b5a0-4a37-af4e-03b4585b6d71/volumes" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.546798 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data" (OuterVolumeSpecName: "config-data") pod "341c8347-e47b-42c7-ace7-acb55f2b8c0f" (UID: "341c8347-e47b-42c7-ace7-acb55f2b8c0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.578066 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.578101 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfbmc\" (UniqueName: \"kubernetes.io/projected/341c8347-e47b-42c7-ace7-acb55f2b8c0f-kube-api-access-lfbmc\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.578111 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/341c8347-e47b-42c7-ace7-acb55f2b8c0f-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.578121 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/341c8347-e47b-42c7-ace7-acb55f2b8c0f-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.579131 5010 generic.go:334] "Generic (PLEG): container finished" podID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerID="af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb" exitCode=0 Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.579639 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerDied","Data":"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb"} Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.579685 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"341c8347-e47b-42c7-ace7-acb55f2b8c0f","Type":"ContainerDied","Data":"c47f6676aaf9cff804c2a71888dc81341a699bfd049b92c645db6bd9367bad06"} Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.579709 5010 scope.go:117] "RemoveContainer" containerID="af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.579907 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.589626 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fmn8g" event={"ID":"900a4dd0-c8e2-4416-9a0e-8fff95a5053b","Type":"ContainerStarted","Data":"79dc7129a99144c2e59b3fda9930b79947c9ac7a248d6f8abe7b85572f2f5ea2"} Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.589694 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fmn8g" event={"ID":"900a4dd0-c8e2-4416-9a0e-8fff95a5053b","Type":"ContainerStarted","Data":"5e355931a7d8aee1e5fce1e85e08f90a6fc5e4e40c3b64d40ecde61b241ba2a4"} Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.600092 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerStarted","Data":"4d55ccaf8e8fbc23ae8d8fcb578bf3c1e898e367f6ccb3f3993272add85d622a"} Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.763568 5010 scope.go:117] "RemoveContainer" containerID="28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.792685 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-fmn8g" podStartSLOduration=2.7926529970000002 podStartE2EDuration="2.792652997s" podCreationTimestamp="2026-02-03 10:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:12.625583894 +0000 UTC m=+1562.781560033" watchObservedRunningTime="2026-02-03 10:28:12.792652997 +0000 UTC m=+1562.948629126" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.838576 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.865372 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.865458 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:12 crc kubenswrapper[5010]: E0203 10:28:12.866037 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-api" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.866054 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-api" Feb 03 10:28:12 crc kubenswrapper[5010]: E0203 10:28:12.866097 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-log" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.866103 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-log" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.866318 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-api" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.866335 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" containerName="nova-api-log" Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.867545 5010 util.go:30] "No sandbox for pod can be found. 
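A quick way to pull the pod lifecycle out of a stream like this is to keep only the PLEG records: every "SyncLoop (PLEG)" line carries the pod name, the event type (ContainerStarted/ContainerDied), and a container ID in its Data field. A small Go filter written against the shape of the lines above (the regexp is approximate, not a guaranteed match for every klog variant; piping `journalctl -u kubelet` through it is the assumed usage):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches PLEG event lines like the ones above, e.g.
//   ... "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0"
//   event={"ID":"341c...","Type":"ContainerDied","Data":"af27..."}
var pleg = regexp.MustCompile(`SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := pleg.FindStringSubmatch(sc.Text()); m != nil {
			// pod name, event type, first 12 chars of the container ID
			fmt.Printf("%-45s %-18s %.12s\n", m[1], m[3], m[4])
		}
	}
}
```

Fed the excerpt above, this reduces the stream to lines like `openstack/nova-api-0  ContainerDied  af275596b986`, which makes the restart sequencing much easier to follow.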
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.872509 5010 scope.go:117] "RemoveContainer" containerID="af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.873642 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.873705 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.874083 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.876004 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 03 10:28:12 crc kubenswrapper[5010]: E0203 10:28:12.879297 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb\": container with ID starting with af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb not found: ID does not exist" containerID="af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.879358 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb"} err="failed to get container status \"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb\": rpc error: code = NotFound desc = could not find container \"af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb\": container with ID starting with af275596b9860484c5fd55bdd2d8a0fa34ae82a578116d42125ae9f9d6be8cfb not found: ID does not exist"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.879389 5010 scope.go:117] "RemoveContainer" containerID="28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6"
Feb 03 10:28:12 crc kubenswrapper[5010]: E0203 10:28:12.885107 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6\": container with ID starting with 28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6 not found: ID does not exist" containerID="28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.885182 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6"} err="failed to get container status \"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6\": rpc error: code = NotFound desc = could not find container \"28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6\": container with ID starting with 28b355b9cad67a2ac628fda655f008b4e7b4012e343a56faf3aa1be2ca28e7f6 not found: ID does not exist"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.885755 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.885907 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.885943 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.886084 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.886398 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slhxg\" (UniqueName: \"kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.886790 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.989120 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.989629 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.989726 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.989848 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.990018 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slhxg\" (UniqueName: \"kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.990237 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.990719 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.998435 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:12 crc kubenswrapper[5010]: I0203 10:28:12.998677 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.000159 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.000781 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.018509 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slhxg\" (UniqueName: \"kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg\") pod \"nova-api-0\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " pod="openstack/nova-api-0"
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.273009 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.618681 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerStarted","Data":"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e"}
Feb 03 10:28:13 crc kubenswrapper[5010]: I0203 10:28:13.876422 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 03 10:28:14 crc kubenswrapper[5010]: I0203 10:28:14.600207 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="341c8347-e47b-42c7-ace7-acb55f2b8c0f" path="/var/lib/kubelet/pods/341c8347-e47b-42c7-ace7-acb55f2b8c0f/volumes"
Feb 03 10:28:14 crc kubenswrapper[5010]: I0203 10:28:14.661452 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerStarted","Data":"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c"}
Feb 03 10:28:14 crc kubenswrapper[5010]: I0203 10:28:14.661510 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerStarted","Data":"4ba4db9ad461a1c8c1413d0c4001a20f6f253c1f4411549548ef5cb960e4f2f8"}
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.675795 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerStarted","Data":"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363"}
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.676115 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerStarted","Data":"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276"}
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.678621 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerStarted","Data":"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666"}
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.707384 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.707362629 podStartE2EDuration="3.707362629s" podCreationTimestamp="2026-02-03 10:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:15.698154395 +0000 UTC m=+1565.854130524" watchObservedRunningTime="2026-02-03 10:28:15.707362629 +0000 UTC m=+1565.863338758"
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.813520 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf"
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.906058 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"]
Feb 03 10:28:15 crc kubenswrapper[5010]: I0203 10:28:15.906914 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="dnsmasq-dns" containerID="cri-o://023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74" gracePeriod=10
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.397696 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.397749 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.397842 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.398617 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.398661 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" gracePeriod=600
Feb 03 10:28:16 crc kubenswrapper[5010]: E0203 10:28:16.558642 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.586625 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-x25nd"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.712179 5010 generic.go:334] "Generic (PLEG): container finished" podID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerID="023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74" exitCode=0
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.712373 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" event={"ID":"55ad6744-8ba2-49c4-bf2c-986f85f40079","Type":"ContainerDied","Data":"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74"}
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.712393 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-x25nd"
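The machine-config-daemon episode above is the standard liveness-probe kill path: the prober GETs http://127.0.0.1:8798/health, gets connection refused, the kubelet marks the container unhealthy, kills it with gracePeriod=600, and then pod_workers refuses an immediate restart with a 5m CrashLoopBackOff. A minimal sketch of the same HTTP probe semantics (the URL comes from the lines above; the failureThreshold of 3 is an assumption, as it is not visible in this log, and the restart plumbing is hypothetical):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check the way prober.go reports it above:
// any transport error (e.g. "connect: connection refused") or a status code
// outside [200,400) counts as a failure.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe returned status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const url = "http://127.0.0.1:8798/health" // from the probe output above
	const failureThreshold = 3                 // assumed; not shown in the log
	failures := 0
	for i := 0; i < failureThreshold; i++ {
		if err := probe(url); err != nil {
			failures++
			fmt.Printf("Probe failed: %v (%d consecutive)\n", err, failures)
			continue
		}
		failures = 0 // any success resets the consecutive-failure count
	}
	if failures >= failureThreshold {
		fmt.Println("liveness exceeded threshold: container would be killed and restarted")
	}
}
```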
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.712413 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-x25nd" event={"ID":"55ad6744-8ba2-49c4-bf2c-986f85f40079","Type":"ContainerDied","Data":"7edb2d5b18afc723b6414cab56e64b2430add9e831d1db279a0d0981b7c44bb5"}
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.712432 5010 scope.go:117] "RemoveContainer" containerID="023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.726814 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" exitCode=0
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.727222 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1"}
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.728086 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1"
Feb 03 10:28:16 crc kubenswrapper[5010]: E0203 10:28:16.728338 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764233 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") "
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764450 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") "
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764488 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") "
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764538 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") "
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764597 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") "
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.764715 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdv6g\" (UniqueName: \"kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g\") pod \"55ad6744-8ba2-49c4-bf2c-986f85f40079\" (UID: \"55ad6744-8ba2-49c4-bf2c-986f85f40079\") "
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.812663 5010 scope.go:117] "RemoveContainer" containerID="1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.826533 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g" (OuterVolumeSpecName: "kube-api-access-vdv6g") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "kube-api-access-vdv6g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.868035 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdv6g\" (UniqueName: \"kubernetes.io/projected/55ad6744-8ba2-49c4-bf2c-986f85f40079-kube-api-access-vdv6g\") on node \"crc\" DevicePath \"\""
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.891874 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.907537 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config" (OuterVolumeSpecName: "config") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.937680 5010 scope.go:117] "RemoveContainer" containerID="023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.938021 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.938076 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:28:16 crc kubenswrapper[5010]: E0203 10:28:16.941806 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74\": container with ID starting with 023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74 not found: ID does not exist" containerID="023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.941856 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74"} err="failed to get container status \"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74\": rpc error: code = NotFound desc = could not find container \"023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74\": container with ID starting with 023ccca07b4778153919ff22e16137e430f4a07ca1b10115037a4543214f0c74 not found: ID does not exist"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.941892 5010 scope.go:117] "RemoveContainer" containerID="1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33"
Feb 03 10:28:16 crc kubenswrapper[5010]: E0203 10:28:16.942508 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33\": container with ID starting with 1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33 not found: ID does not exist" containerID="1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.942556 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33"} err="failed to get container status \"1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33\": rpc error: code = NotFound desc = could not find container \"1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33\": container with ID starting with 1947217ed252755389b58ec73dafb5c0c5c7fbd1d7f80b6677ba6a66639adb33 not found: ID does not exist"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.942574 5010 scope.go:117] "RemoveContainer" containerID="feb6be59c5f60eb4fb5b49379a30e3d1c2e1212fd73c563908d470b35420da88"
Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.958942 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "55ad6744-8ba2-49c4-bf2c-986f85f40079" (UID: "55ad6744-8ba2-49c4-bf2c-986f85f40079"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.969504 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.969540 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.969553 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.969562 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:16 crc kubenswrapper[5010]: I0203 10:28:16.969572 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55ad6744-8ba2-49c4-bf2c-986f85f40079-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:17 crc kubenswrapper[5010]: I0203 10:28:17.169163 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"] Feb 03 10:28:17 crc kubenswrapper[5010]: I0203 10:28:17.187728 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-x25nd"] Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.515876 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" path="/var/lib/kubelet/pods/55ad6744-8ba2-49c4-bf2c-986f85f40079/volumes" Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754068 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerStarted","Data":"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48"} Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754182 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-central-agent" containerID="cri-o://21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e" gracePeriod=30 Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754247 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754266 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="proxy-httpd" containerID="cri-o://63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48" gracePeriod=30 Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754296 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="sg-core" containerID="cri-o://353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363" gracePeriod=30 Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.754276 5010 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-notification-agent" containerID="cri-o://4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276" gracePeriod=30 Feb 03 10:28:18 crc kubenswrapper[5010]: I0203 10:28:18.800135 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.142478042 podStartE2EDuration="8.800116555s" podCreationTimestamp="2026-02-03 10:28:10 +0000 UTC" firstStartedPulling="2026-02-03 10:28:11.897394053 +0000 UTC m=+1562.053370182" lastFinishedPulling="2026-02-03 10:28:17.555032566 +0000 UTC m=+1567.711008695" observedRunningTime="2026-02-03 10:28:18.786955389 +0000 UTC m=+1568.942931518" watchObservedRunningTime="2026-02-03 10:28:18.800116555 +0000 UTC m=+1568.956092684" Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.766143 5010 generic.go:334] "Generic (PLEG): container finished" podID="900a4dd0-c8e2-4416-9a0e-8fff95a5053b" containerID="79dc7129a99144c2e59b3fda9930b79947c9ac7a248d6f8abe7b85572f2f5ea2" exitCode=0 Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.766385 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fmn8g" event={"ID":"900a4dd0-c8e2-4416-9a0e-8fff95a5053b","Type":"ContainerDied","Data":"79dc7129a99144c2e59b3fda9930b79947c9ac7a248d6f8abe7b85572f2f5ea2"} Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.770866 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerID="63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48" exitCode=0 Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.770896 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerID="353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363" exitCode=2 Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.770905 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerID="4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276" exitCode=0 Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.770969 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerDied","Data":"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48"} Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.771067 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerDied","Data":"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363"} Feb 03 10:28:19 crc kubenswrapper[5010]: I0203 10:28:19.771105 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerDied","Data":"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276"} Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.249885 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.308464 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data\") pod \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.308578 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts\") pod \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.308652 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle\") pod \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.308831 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhn74\" (UniqueName: \"kubernetes.io/projected/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-kube-api-access-hhn74\") pod \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\" (UID: \"900a4dd0-c8e2-4416-9a0e-8fff95a5053b\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.318670 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts" (OuterVolumeSpecName: "scripts") pod "900a4dd0-c8e2-4416-9a0e-8fff95a5053b" (UID: "900a4dd0-c8e2-4416-9a0e-8fff95a5053b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.320573 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-kube-api-access-hhn74" (OuterVolumeSpecName: "kube-api-access-hhn74") pod "900a4dd0-c8e2-4416-9a0e-8fff95a5053b" (UID: "900a4dd0-c8e2-4416-9a0e-8fff95a5053b"). InnerVolumeSpecName "kube-api-access-hhn74". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.360709 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data" (OuterVolumeSpecName: "config-data") pod "900a4dd0-c8e2-4416-9a0e-8fff95a5053b" (UID: "900a4dd0-c8e2-4416-9a0e-8fff95a5053b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.376909 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "900a4dd0-c8e2-4416-9a0e-8fff95a5053b" (UID: "900a4dd0-c8e2-4416-9a0e-8fff95a5053b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.411412 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhn74\" (UniqueName: \"kubernetes.io/projected/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-kube-api-access-hhn74\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.411478 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.411501 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.411519 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900a4dd0-c8e2-4416-9a0e-8fff95a5053b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.527883 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.616934 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617023 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617078 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617184 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617257 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617305 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617502 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.617612 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5zz8\" (UniqueName: \"kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8\") pod \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\" (UID: \"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff\") " Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.619285 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.619640 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.622370 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8" (OuterVolumeSpecName: "kube-api-access-z5zz8") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "kube-api-access-z5zz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.625556 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts" (OuterVolumeSpecName: "scripts") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.646532 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.672058 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.696441 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.715688 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data" (OuterVolumeSpecName: "config-data") pod "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" (UID: "16b3cd8c-3ab7-4cb7-8add-fa14d782ddff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720282 5010 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720333 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5zz8\" (UniqueName: \"kubernetes.io/projected/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-kube-api-access-z5zz8\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720345 5010 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720354 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720365 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720378 5010 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720390 5010 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.720402 5010 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.798827 5010 generic.go:334] "Generic (PLEG): container finished" podID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerID="21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e" exitCode=0 Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.798943 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerDied","Data":"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e"} Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.799003 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff","Type":"ContainerDied","Data":"4d55ccaf8e8fbc23ae8d8fcb578bf3c1e898e367f6ccb3f3993272add85d622a"} Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 
10:28:21.799029 5010 scope.go:117] "RemoveContainer" containerID="63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.798955 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.804636 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fmn8g" event={"ID":"900a4dd0-c8e2-4416-9a0e-8fff95a5053b","Type":"ContainerDied","Data":"5e355931a7d8aee1e5fce1e85e08f90a6fc5e4e40c3b64d40ecde61b241ba2a4"} Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.804685 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e355931a7d8aee1e5fce1e85e08f90a6fc5e4e40c3b64d40ecde61b241ba2a4" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.804746 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fmn8g" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.847809 5010 scope.go:117] "RemoveContainer" containerID="353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.874234 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.894653 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.897075 5010 scope.go:117] "RemoveContainer" containerID="4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.904849 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905582 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="dnsmasq-dns" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905607 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="dnsmasq-dns" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905627 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-notification-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905639 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-notification-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905652 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="sg-core" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905661 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="sg-core" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905672 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="init" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905679 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="init" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905690 5010 cpu_manager.go:410] "RemoveStaleState: removing container" 
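
The paired cpu_manager.go:410 / state_mem.go:107 entries here (and the memory_manager.go:354 entries that follow) show the kubelet's resource managers sweeping out per-container state left behind by pods that no longer exist, just before the replacement ceilometer-0 is admitted. A sketch of that sweep pattern under stand-in types — the real managers track CPU sets and memory blocks, not plain int slices:

```go
// stalestate.go - a sketch of the sweep behind the "RemoveStaleState" lines
// above: any per-container resource assignment whose pod UID is no longer in
// the active set is discarded. The types are illustrative stand-ins, not the
// kubelet's cpu_manager/memory_manager internals.
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops assignments for pods that no longer exist; Go maps
// permit deletion during range iteration, so one pass suffices.
func removeStaleState(assignments map[key][]int, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("removing stale assignment for container %q of pod %s\n",
				k.container, k.podUID)
			delete(assignments, k)
		}
	}
}

func main() {
	// UIDs copied from the log: the old ceilometer-0 pod (16b3cd8c-...) is
	// gone, the replacement (fe58e747-...) is active; the CPU lists are made up.
	assignments := map[key][]int{
		{"16b3cd8c-3ab7-4cb7-8add-fa14d782ddff", "sg-core"}:     {2, 3},
		{"fe58e747-c39e-4370-93bc-f72f8c5ee95a", "proxy-httpd"}: {4, 5},
	}
	active := map[string]bool{"fe58e747-c39e-4370-93bc-f72f8c5ee95a": true}
	removeStaleState(assignments, active)
	fmt.Println("assignments left:", len(assignments))
}
```
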
podUID="900a4dd0-c8e2-4416-9a0e-8fff95a5053b" containerName="nova-manage" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905696 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="900a4dd0-c8e2-4416-9a0e-8fff95a5053b" containerName="nova-manage" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905769 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-central-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905778 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-central-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: E0203 10:28:21.905790 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="proxy-httpd" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.905796 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="proxy-httpd" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906055 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-notification-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906086 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="sg-core" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906111 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="900a4dd0-c8e2-4416-9a0e-8fff95a5053b" containerName="nova-manage" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906124 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="ceilometer-central-agent" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906144 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="55ad6744-8ba2-49c4-bf2c-986f85f40079" containerName="dnsmasq-dns" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.906157 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" containerName="proxy-httpd" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.909277 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.912955 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.915923 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.916312 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.917609 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933062 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-scripts\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933131 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933161 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-log-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933386 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-config-data\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933473 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933627 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933724 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd872\" (UniqueName: \"kubernetes.io/projected/fe58e747-c39e-4370-93bc-f72f8c5ee95a-kube-api-access-cd872\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.933769 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-run-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:21 crc kubenswrapper[5010]: I0203 10:28:21.954245 5010 scope.go:117] "RemoveContainer" containerID="21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.036075 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.036809 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-log" containerID="cri-o://f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" gracePeriod=30 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037360 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-api" containerID="cri-o://4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" gracePeriod=30 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037415 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-scripts\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037487 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037517 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-log-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037593 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-config-data\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037658 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037689 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037744 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd872\" (UniqueName: 
\"kubernetes.io/projected/fe58e747-c39e-4370-93bc-f72f8c5ee95a-kube-api-access-cd872\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.037782 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-run-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.038715 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-run-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.039018 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe58e747-c39e-4370-93bc-f72f8c5ee95a-log-httpd\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.045102 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.045102 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.046529 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.051982 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-scripts\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.052688 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe58e747-c39e-4370-93bc-f72f8c5ee95a-config-data\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.053896 5010 scope.go:117] "RemoveContainer" containerID="63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48" Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.054480 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48\": container with ID starting with 63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48 not found: ID does not exist" 
containerID="63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.054521 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48"} err="failed to get container status \"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48\": rpc error: code = NotFound desc = could not find container \"63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48\": container with ID starting with 63c385d253f7cfc5e116f8a4400315223d92158a58c76f77465218ba5297ea48 not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.054550 5010 scope.go:117] "RemoveContainer" containerID="353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363" Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.055790 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363\": container with ID starting with 353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363 not found: ID does not exist" containerID="353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.055843 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363"} err="failed to get container status \"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363\": rpc error: code = NotFound desc = could not find container \"353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363\": container with ID starting with 353a2008fc1c63b34785472002d8e9e03a99c185222b5cedda46c86de0b31363 not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.055881 5010 scope.go:117] "RemoveContainer" containerID="4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276" Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.057504 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276\": container with ID starting with 4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276 not found: ID does not exist" containerID="4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.057587 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276"} err="failed to get container status \"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276\": rpc error: code = NotFound desc = could not find container \"4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276\": container with ID starting with 4fc725559e3149530687de842237e4428da86034d95d146b1dc951a28d688276 not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.057650 5010 scope.go:117] "RemoveContainer" containerID="21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.065929 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.066363 5010 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerName="nova-scheduler-scheduler" containerID="cri-o://3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" gracePeriod=30 Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.067389 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e\": container with ID starting with 21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e not found: ID does not exist" containerID="21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.067452 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e"} err="failed to get container status \"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e\": rpc error: code = NotFound desc = could not find container \"21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e\": container with ID starting with 21fed0c3582c2af0c63bad6996ff877bac5c3b1b56aeb054842d0cb45399564e not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.068973 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd872\" (UniqueName: \"kubernetes.io/projected/fe58e747-c39e-4370-93bc-f72f8c5ee95a-kube-api-access-cd872\") pod \"ceilometer-0\" (UID: \"fe58e747-c39e-4370-93bc-f72f8c5ee95a\") " pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.135283 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.136021 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" containerID="cri-o://a78044c6ee003f2a2c2b9afaa9ab8fb12ae812a98e2ee39a42b2fc304776640e" gracePeriod=30 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.136347 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" containerID="cri-o://70f58e247699be77808ee32bd051173d13561654851dcea2d20478da52e6150e" gracePeriod=30 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.293835 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.577031 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16b3cd8c-3ab7-4cb7-8add-fa14d782ddff" path="/var/lib/kubelet/pods/16b3cd8c-3ab7-4cb7-8add-fa14d782ddff/volumes" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.689422 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812149 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812307 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812348 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slhxg\" (UniqueName: \"kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812396 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812451 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.812544 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle\") pod \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\" (UID: \"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b\") " Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.814458 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs" (OuterVolumeSpecName: "logs") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.822402 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg" (OuterVolumeSpecName: "kube-api-access-slhxg") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "kube-api-access-slhxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.841115 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.841287 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerDied","Data":"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666"} Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.841724 5010 scope.go:117] "RemoveContainer" containerID="4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.840990 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerID="4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" exitCode=0 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.842143 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerID="f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" exitCode=143 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.842279 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerDied","Data":"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c"} Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.842414 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b","Type":"ContainerDied","Data":"4ba4db9ad461a1c8c1413d0c4001a20f6f253c1f4411549548ef5cb960e4f2f8"} Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.854539 5010 generic.go:334] "Generic (PLEG): container finished" podID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerID="70f58e247699be77808ee32bd051173d13561654851dcea2d20478da52e6150e" exitCode=143 Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.854598 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerDied","Data":"70f58e247699be77808ee32bd051173d13561654851dcea2d20478da52e6150e"} Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.856862 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.879607 5010 scope.go:117] "RemoveContainer" containerID="f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.884992 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data" (OuterVolumeSpecName: "config-data") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.885625 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.902588 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" (UID: "1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.904506 5010 scope.go:117] "RemoveContainer" containerID="4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.906020 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666\": container with ID starting with 4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666 not found: ID does not exist" containerID="4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906110 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666"} err="failed to get container status \"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666\": rpc error: code = NotFound desc = could not find container \"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666\": container with ID starting with 4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666 not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906147 5010 scope.go:117] "RemoveContainer" containerID="f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" Feb 03 10:28:22 crc kubenswrapper[5010]: E0203 10:28:22.906482 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c\": container with ID starting with f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c not found: ID does not exist" containerID="f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906517 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c"} err="failed to get container status \"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c\": rpc error: code = NotFound desc = could not find container \"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c\": container with ID starting with f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906538 5010 scope.go:117] "RemoveContainer" 
containerID="4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906860 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666"} err="failed to get container status \"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666\": rpc error: code = NotFound desc = could not find container \"4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666\": container with ID starting with 4aae18ffaab54aa324fb5ff6ee8a6d15f626d0891f6c39347e320d8ddf905666 not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.906887 5010 scope.go:117] "RemoveContainer" containerID="f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.907151 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c"} err="failed to get container status \"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c\": rpc error: code = NotFound desc = could not find container \"f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c\": container with ID starting with f39494cdaf21ca481ead70286e1f51940d44bfb088b8e4c8b193a6a39318905c not found: ID does not exist" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916622 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916676 5010 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916691 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slhxg\" (UniqueName: \"kubernetes.io/projected/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-kube-api-access-slhxg\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916703 5010 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916725 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:22 crc kubenswrapper[5010]: I0203 10:28:22.916740 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.035858 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 10:28:23 crc kubenswrapper[5010]: W0203 10:28:23.037362 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe58e747_c39e_4370_93bc_f72f8c5ee95a.slice/crio-66cb3129fc03dffd78ff3ec6bfe9112c6f1b13c3583329999e822cc839867080 WatchSource:0}: Error finding container 
66cb3129fc03dffd78ff3ec6bfe9112c6f1b13c3583329999e822cc839867080: Status 404 returned error can't find the container with id 66cb3129fc03dffd78ff3ec6bfe9112c6f1b13c3583329999e822cc839867080 Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.179026 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.197067 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.221814 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.222449 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-log" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.222472 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-log" Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.222517 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-api" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.222525 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-api" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.222744 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-log" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.222782 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" containerName="nova-api-api" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.224249 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.230816 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.235834 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.238429 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.239451 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.274268 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.283509 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.286387 5010 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 10:28:23 crc kubenswrapper[5010]: E0203 10:28:23.286453 5010 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerName="nova-scheduler-scheduler" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.327954 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-config-data\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.328024 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn84h\" (UniqueName: \"kubernetes.io/projected/aba2689d-cd13-4601-ac45-69409c411839-kube-api-access-sn84h\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.328085 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.328118 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-public-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.328401 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.328871 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba2689d-cd13-4601-ac45-69409c411839-logs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439280 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439401 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba2689d-cd13-4601-ac45-69409c411839-logs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439455 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-config-data\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439478 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn84h\" (UniqueName: \"kubernetes.io/projected/aba2689d-cd13-4601-ac45-69409c411839-kube-api-access-sn84h\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439512 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.439537 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-public-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.440827 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aba2689d-cd13-4601-ac45-69409c411839-logs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.455312 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-public-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.462896 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn84h\" (UniqueName: \"kubernetes.io/projected/aba2689d-cd13-4601-ac45-69409c411839-kube-api-access-sn84h\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.465023 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.465196 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.487394 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aba2689d-cd13-4601-ac45-69409c411839-config-data\") pod \"nova-api-0\" (UID: \"aba2689d-cd13-4601-ac45-69409c411839\") " pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.570348 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 10:28:23 crc kubenswrapper[5010]: I0203 10:28:23.882677 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe58e747-c39e-4370-93bc-f72f8c5ee95a","Type":"ContainerStarted","Data":"66cb3129fc03dffd78ff3ec6bfe9112c6f1b13c3583329999e822cc839867080"} Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.536483 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b" path="/var/lib/kubelet/pods/1c7ae2ce-1db2-4079-80ef-2e2fdc0b785b/volumes" Feb 03 10:28:24 crc kubenswrapper[5010]: W0203 10:28:24.582607 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaba2689d_cd13_4601_ac45_69409c411839.slice/crio-96678b45cdfbb1ead44162e62acb7726902eb1ffd62d471a3b3d56338399f5b2 WatchSource:0}: Error finding container 96678b45cdfbb1ead44162e62acb7726902eb1ffd62d471a3b3d56338399f5b2: Status 404 returned error can't find the container with id 96678b45cdfbb1ead44162e62acb7726902eb1ffd62d471a3b3d56338399f5b2 Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.591143 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.903233 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe58e747-c39e-4370-93bc-f72f8c5ee95a","Type":"ContainerStarted","Data":"3fbbf425d6a8ae69a735e172d0f5fc3d55f7bb760d5fa7d006ec36b95d816215"} Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.905872 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"aba2689d-cd13-4601-ac45-69409c411839","Type":"ContainerStarted","Data":"bada7cce176643549ba1bc1cf410273f97a38e5aef52492efb83cb84621b729d"} Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.905944 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aba2689d-cd13-4601-ac45-69409c411839","Type":"ContainerStarted","Data":"96678b45cdfbb1ead44162e62acb7726902eb1ffd62d471a3b3d56338399f5b2"} Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.911178 5010 generic.go:334] "Generic (PLEG): container finished" podID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerID="3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" exitCode=0 Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.911344 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2d836d0-d303-41ca-9c8b-f714d6a4e76c","Type":"ContainerDied","Data":"3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3"} Feb 03 10:28:24 crc kubenswrapper[5010]: I0203 10:28:24.922971 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.085478 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data\") pod \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.086294 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle\") pod \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.086444 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6chss\" (UniqueName: \"kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss\") pod \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\" (UID: \"a2d836d0-d303-41ca-9c8b-f714d6a4e76c\") " Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.094718 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss" (OuterVolumeSpecName: "kube-api-access-6chss") pod "a2d836d0-d303-41ca-9c8b-f714d6a4e76c" (UID: "a2d836d0-d303-41ca-9c8b-f714d6a4e76c"). InnerVolumeSpecName "kube-api-access-6chss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.127415 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2d836d0-d303-41ca-9c8b-f714d6a4e76c" (UID: "a2d836d0-d303-41ca-9c8b-f714d6a4e76c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.127471 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data" (OuterVolumeSpecName: "config-data") pod "a2d836d0-d303-41ca-9c8b-f714d6a4e76c" (UID: "a2d836d0-d303-41ca-9c8b-f714d6a4e76c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.189550 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.189585 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.189597 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6chss\" (UniqueName: \"kubernetes.io/projected/a2d836d0-d303-41ca-9c8b-f714d6a4e76c-kube-api-access-6chss\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.710417 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:51018->10.217.0.192:8775: read: connection reset by peer" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.710455 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:51030->10.217.0.192:8775: read: connection reset by peer" Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.990472 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe58e747-c39e-4370-93bc-f72f8c5ee95a","Type":"ContainerStarted","Data":"b673fa7a85d4061e235a60332d266e4ae0d06383842372e25f038dfe5add4f5b"} Feb 03 10:28:25 crc kubenswrapper[5010]: I0203 10:28:25.990910 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe58e747-c39e-4370-93bc-f72f8c5ee95a","Type":"ContainerStarted","Data":"30a71c784b7a9ba1d4aaa61721c0c5204c9023396080748a48ec3a5135045f10"} Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.011738 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aba2689d-cd13-4601-ac45-69409c411839","Type":"ContainerStarted","Data":"983a5c24c4d341cce56231a45d3dc293050162227992ac74a4484151faa42ffe"} Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.017367 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a2d836d0-d303-41ca-9c8b-f714d6a4e76c","Type":"ContainerDied","Data":"58f162aa3d6e537665ac2963288a9914168137aa741e22132f9fea00cc29574c"} Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.017432 5010 scope.go:117] "RemoveContainer" containerID="3b3e32798695ef193d14b863df180f74f04391661ad55526322e40cae223bae3" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.017709 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.023089 5010 generic.go:334] "Generic (PLEG): container finished" podID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerID="a78044c6ee003f2a2c2b9afaa9ab8fb12ae812a98e2ee39a42b2fc304776640e" exitCode=0 Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.023138 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerDied","Data":"a78044c6ee003f2a2c2b9afaa9ab8fb12ae812a98e2ee39a42b2fc304776640e"} Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.088143 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.088116648 podStartE2EDuration="3.088116648s" podCreationTimestamp="2026-02-03 10:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:26.043874869 +0000 UTC m=+1576.199851018" watchObservedRunningTime="2026-02-03 10:28:26.088116648 +0000 UTC m=+1576.244092777" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.119672 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.136139 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.149409 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.153423 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:26 crc kubenswrapper[5010]: E0203 10:28:26.153796 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.153807 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" Feb 03 10:28:26 crc kubenswrapper[5010]: E0203 10:28:26.153823 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.153829 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" Feb 03 10:28:26 crc kubenswrapper[5010]: E0203 10:28:26.153843 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerName="nova-scheduler-scheduler" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.153851 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerName="nova-scheduler-scheduler" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.154046 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-log" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.154062 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" containerName="nova-metadata-metadata" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.154073 5010 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" containerName="nova-scheduler-scheduler" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.156620 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.159421 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.169151 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.231400 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data\") pod \"4c43ac79-0458-4b95-a9fd-26bc038c195b\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.231720 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs\") pod \"4c43ac79-0458-4b95-a9fd-26bc038c195b\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.231789 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs\") pod \"4c43ac79-0458-4b95-a9fd-26bc038c195b\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.231823 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle\") pod \"4c43ac79-0458-4b95-a9fd-26bc038c195b\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.232172 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bnxb\" (UniqueName: \"kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb\") pod \"4c43ac79-0458-4b95-a9fd-26bc038c195b\" (UID: \"4c43ac79-0458-4b95-a9fd-26bc038c195b\") " Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.232711 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrm6k\" (UniqueName: \"kubernetes.io/projected/28559aae-4731-4653-a466-8c6f5c6c7dcf-kube-api-access-vrm6k\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.232862 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-config-data\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.232924 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc 
kubenswrapper[5010]: I0203 10:28:26.249092 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs" (OuterVolumeSpecName: "logs") pod "4c43ac79-0458-4b95-a9fd-26bc038c195b" (UID: "4c43ac79-0458-4b95-a9fd-26bc038c195b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.277481 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb" (OuterVolumeSpecName: "kube-api-access-9bnxb") pod "4c43ac79-0458-4b95-a9fd-26bc038c195b" (UID: "4c43ac79-0458-4b95-a9fd-26bc038c195b"). InnerVolumeSpecName "kube-api-access-9bnxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.305682 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c43ac79-0458-4b95-a9fd-26bc038c195b" (UID: "4c43ac79-0458-4b95-a9fd-26bc038c195b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.316754 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4c43ac79-0458-4b95-a9fd-26bc038c195b" (UID: "4c43ac79-0458-4b95-a9fd-26bc038c195b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335009 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrm6k\" (UniqueName: \"kubernetes.io/projected/28559aae-4731-4653-a466-8c6f5c6c7dcf-kube-api-access-vrm6k\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335123 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-config-data\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335173 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335247 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bnxb\" (UniqueName: \"kubernetes.io/projected/4c43ac79-0458-4b95-a9fd-26bc038c195b-kube-api-access-9bnxb\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335261 5010 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c43ac79-0458-4b95-a9fd-26bc038c195b-logs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335272 5010 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.335284 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.344119 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.345720 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28559aae-4731-4653-a466-8c6f5c6c7dcf-config-data\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.356431 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data" (OuterVolumeSpecName: "config-data") pod "4c43ac79-0458-4b95-a9fd-26bc038c195b" (UID: "4c43ac79-0458-4b95-a9fd-26bc038c195b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.358009 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrm6k\" (UniqueName: \"kubernetes.io/projected/28559aae-4731-4653-a466-8c6f5c6c7dcf-kube-api-access-vrm6k\") pod \"nova-scheduler-0\" (UID: \"28559aae-4731-4653-a466-8c6f5c6c7dcf\") " pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.436705 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.437946 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c43ac79-0458-4b95-a9fd-26bc038c195b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.514288 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d836d0-d303-41ca-9c8b-f714d6a4e76c" path="/var/lib/kubelet/pods/a2d836d0-d303-41ca-9c8b-f714d6a4e76c/volumes" Feb 03 10:28:26 crc kubenswrapper[5010]: I0203 10:28:26.966749 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 10:28:26 crc kubenswrapper[5010]: W0203 10:28:26.977603 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28559aae_4731_4653_a466_8c6f5c6c7dcf.slice/crio-2ed910c827770743af4ba77485f94924ae732d9b7ebf8412c33571e414d0961c WatchSource:0}: Error finding container 2ed910c827770743af4ba77485f94924ae732d9b7ebf8412c33571e414d0961c: Status 404 returned error can't find the container with id 2ed910c827770743af4ba77485f94924ae732d9b7ebf8412c33571e414d0961c Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.033744 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28559aae-4731-4653-a466-8c6f5c6c7dcf","Type":"ContainerStarted","Data":"2ed910c827770743af4ba77485f94924ae732d9b7ebf8412c33571e414d0961c"} Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.038529 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.038997 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c43ac79-0458-4b95-a9fd-26bc038c195b","Type":"ContainerDied","Data":"d8c29f4fa62c3f6d24562331b8a0ba99f0c35f78468e992ff282bcdb95f55c82"} Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.039027 5010 scope.go:117] "RemoveContainer" containerID="a78044c6ee003f2a2c2b9afaa9ab8fb12ae812a98e2ee39a42b2fc304776640e" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.096440 5010 scope.go:117] "RemoveContainer" containerID="70f58e247699be77808ee32bd051173d13561654851dcea2d20478da52e6150e" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.112267 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.129971 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.145796 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.147426 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.153805 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.154094 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.230724 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.257909 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edaaf3a7-a254-4a29-875a-643e46308f33-logs\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.257993 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk4tq\" (UniqueName: \"kubernetes.io/projected/edaaf3a7-a254-4a29-875a-643e46308f33-kube-api-access-nk4tq\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.258027 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.258075 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.258112 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-config-data\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.360731 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk4tq\" (UniqueName: \"kubernetes.io/projected/edaaf3a7-a254-4a29-875a-643e46308f33-kube-api-access-nk4tq\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.360846 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.360917 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.360981 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-config-data\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.361052 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edaaf3a7-a254-4a29-875a-643e46308f33-logs\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.361749 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edaaf3a7-a254-4a29-875a-643e46308f33-logs\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.367687 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.371101 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-config-data\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.371813 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edaaf3a7-a254-4a29-875a-643e46308f33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.382822 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk4tq\" (UniqueName: \"kubernetes.io/projected/edaaf3a7-a254-4a29-875a-643e46308f33-kube-api-access-nk4tq\") pod \"nova-metadata-0\" (UID: \"edaaf3a7-a254-4a29-875a-643e46308f33\") " pod="openstack/nova-metadata-0" Feb 03 10:28:27 crc kubenswrapper[5010]: I0203 10:28:27.621495 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.048949 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28559aae-4731-4653-a466-8c6f5c6c7dcf","Type":"ContainerStarted","Data":"13ad0ac55357133529dbef7213e34a9655d73d32b0305d790f3ed0e0bc454043"} Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.051364 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe58e747-c39e-4370-93bc-f72f8c5ee95a","Type":"ContainerStarted","Data":"7dfe01dd5b5df071335a047adc00fd893f119b00593473ed5caf709c9b6193a5"} Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.051601 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.071325 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.071307422 podStartE2EDuration="2.071307422s" podCreationTimestamp="2026-02-03 10:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:28.070562273 +0000 UTC m=+1578.226538412" watchObservedRunningTime="2026-02-03 10:28:28.071307422 +0000 UTC m=+1578.227283551" Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.098187 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.545258234 podStartE2EDuration="7.098165697s" podCreationTimestamp="2026-02-03 10:28:21 +0000 UTC" firstStartedPulling="2026-02-03 10:28:23.041144411 +0000 UTC m=+1573.197120540" lastFinishedPulling="2026-02-03 10:28:27.594051874 +0000 UTC m=+1577.750028003" observedRunningTime="2026-02-03 10:28:28.092602216 +0000 UTC m=+1578.248578345" watchObservedRunningTime="2026-02-03 10:28:28.098165697 +0000 UTC m=+1578.254141816" Feb 03 10:28:28 crc kubenswrapper[5010]: W0203 10:28:28.139057 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedaaf3a7_a254_4a29_875a_643e46308f33.slice/crio-9a7d436f32cd314ad8bbb1fc0c1318b84815558ee4edee486c8a74bfc949d94b WatchSource:0}: Error finding container 9a7d436f32cd314ad8bbb1fc0c1318b84815558ee4edee486c8a74bfc949d94b: Status 404 returned error can't find the container with id 9a7d436f32cd314ad8bbb1fc0c1318b84815558ee4edee486c8a74bfc949d94b Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.154882 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.503772 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:28:28 crc kubenswrapper[5010]: E0203 10:28:28.504496 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:28:28 crc kubenswrapper[5010]: I0203 10:28:28.522247 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c43ac79-0458-4b95-a9fd-26bc038c195b" 
path="/var/lib/kubelet/pods/4c43ac79-0458-4b95-a9fd-26bc038c195b/volumes" Feb 03 10:28:29 crc kubenswrapper[5010]: I0203 10:28:29.082022 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edaaf3a7-a254-4a29-875a-643e46308f33","Type":"ContainerStarted","Data":"34dd5978c6ddc33c553961ffbbc90db6cb3ce288fd9e042a9a3a0ee007729c5e"} Feb 03 10:28:29 crc kubenswrapper[5010]: I0203 10:28:29.082080 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edaaf3a7-a254-4a29-875a-643e46308f33","Type":"ContainerStarted","Data":"76f162e7ffb118a37fd9f58b414f239a530d0e86b8704d45fb9a481cedb91f2c"} Feb 03 10:28:29 crc kubenswrapper[5010]: I0203 10:28:29.082098 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"edaaf3a7-a254-4a29-875a-643e46308f33","Type":"ContainerStarted","Data":"9a7d436f32cd314ad8bbb1fc0c1318b84815558ee4edee486c8a74bfc949d94b"} Feb 03 10:28:29 crc kubenswrapper[5010]: I0203 10:28:29.112320 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.112299334 podStartE2EDuration="2.112299334s" podCreationTimestamp="2026-02-03 10:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:28:29.100978645 +0000 UTC m=+1579.256954784" watchObservedRunningTime="2026-02-03 10:28:29.112299334 +0000 UTC m=+1579.268275473" Feb 03 10:28:31 crc kubenswrapper[5010]: I0203 10:28:31.438707 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 10:28:32 crc kubenswrapper[5010]: I0203 10:28:32.621974 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:28:32 crc kubenswrapper[5010]: I0203 10:28:32.622093 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 10:28:33 crc kubenswrapper[5010]: I0203 10:28:33.571431 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 10:28:33 crc kubenswrapper[5010]: I0203 10:28:33.571556 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.585465 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aba2689d-cd13-4601-ac45-69409c411839" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.585621 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aba2689d-cd13-4601-ac45-69409c411839" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.778786 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"] Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.781011 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.788671 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"] Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.918967 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.919029 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:34 crc kubenswrapper[5010]: I0203 10:28:34.919061 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqw52\" (UniqueName: \"kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.021039 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.021096 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.021126 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqw52\" (UniqueName: \"kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.021748 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.021797 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.051486 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lqw52\" (UniqueName: \"kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52\") pod \"redhat-operators-sg4lc\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.107602 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:35 crc kubenswrapper[5010]: I0203 10:28:35.637838 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"] Feb 03 10:28:36 crc kubenswrapper[5010]: I0203 10:28:36.165661 5010 generic.go:334] "Generic (PLEG): container finished" podID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerID="a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4" exitCode=0 Feb 03 10:28:36 crc kubenswrapper[5010]: I0203 10:28:36.165719 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerDied","Data":"a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4"} Feb 03 10:28:36 crc kubenswrapper[5010]: I0203 10:28:36.165754 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerStarted","Data":"e1eaf28060cc636ff36317d3b149bb856ce747051158d19fd1ca2f7260aa8e45"} Feb 03 10:28:36 crc kubenswrapper[5010]: I0203 10:28:36.438164 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 10:28:36 crc kubenswrapper[5010]: I0203 10:28:36.467549 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 10:28:37 crc kubenswrapper[5010]: I0203 10:28:37.178399 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerStarted","Data":"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb"} Feb 03 10:28:37 crc kubenswrapper[5010]: I0203 10:28:37.219848 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 03 10:28:37 crc kubenswrapper[5010]: I0203 10:28:37.622052 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 10:28:37 crc kubenswrapper[5010]: I0203 10:28:37.622101 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 10:28:38 crc kubenswrapper[5010]: I0203 10:28:38.193547 5010 generic.go:334] "Generic (PLEG): container finished" podID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerID="856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb" exitCode=0 Feb 03 10:28:38 crc kubenswrapper[5010]: I0203 10:28:38.193607 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerDied","Data":"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb"} Feb 03 10:28:38 crc kubenswrapper[5010]: I0203 10:28:38.637430 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="edaaf3a7-a254-4a29-875a-643e46308f33" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 10:28:38 crc kubenswrapper[5010]: I0203 10:28:38.638079 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="edaaf3a7-a254-4a29-875a-643e46308f33" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 10:28:39 crc kubenswrapper[5010]: I0203 10:28:39.208455 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerStarted","Data":"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee"} Feb 03 10:28:39 crc kubenswrapper[5010]: I0203 10:28:39.355781 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sg4lc" podStartSLOduration=2.951509091 podStartE2EDuration="5.35575078s" podCreationTimestamp="2026-02-03 10:28:34 +0000 UTC" firstStartedPulling="2026-02-03 10:28:36.16762433 +0000 UTC m=+1586.323600459" lastFinishedPulling="2026-02-03 10:28:38.571866019 +0000 UTC m=+1588.727842148" observedRunningTime="2026-02-03 10:28:39.328051343 +0000 UTC m=+1589.484027472" watchObservedRunningTime="2026-02-03 10:28:39.35575078 +0000 UTC m=+1589.511726919" Feb 03 10:28:41 crc kubenswrapper[5010]: I0203 10:28:41.502936 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:28:41 crc kubenswrapper[5010]: E0203 10:28:41.503921 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:28:43 crc kubenswrapper[5010]: I0203 10:28:43.577661 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 10:28:43 crc kubenswrapper[5010]: I0203 10:28:43.578136 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 10:28:43 crc kubenswrapper[5010]: I0203 10:28:43.579896 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 10:28:43 crc kubenswrapper[5010]: I0203 10:28:43.584570 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 10:28:44 crc kubenswrapper[5010]: I0203 10:28:44.250448 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 10:28:44 crc kubenswrapper[5010]: I0203 10:28:44.256740 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 10:28:45 crc kubenswrapper[5010]: I0203 10:28:45.108975 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:45 crc kubenswrapper[5010]: I0203 10:28:45.110325 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:45 crc kubenswrapper[5010]: I0203 10:28:45.164017 5010 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:45 crc kubenswrapper[5010]: I0203 10:28:45.306762 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:45 crc kubenswrapper[5010]: I0203 10:28:45.396962 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"] Feb 03 10:28:47 crc kubenswrapper[5010]: I0203 10:28:47.276788 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sg4lc" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="registry-server" containerID="cri-o://3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee" gracePeriod=2 Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:47.630999 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:47.632350 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:47.648719 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:47.856735 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.009129 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqw52\" (UniqueName: \"kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52\") pod \"5185b2c5-d115-4546-afcf-bc17a00a6cda\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.009252 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content\") pod \"5185b2c5-d115-4546-afcf-bc17a00a6cda\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.009344 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities\") pod \"5185b2c5-d115-4546-afcf-bc17a00a6cda\" (UID: \"5185b2c5-d115-4546-afcf-bc17a00a6cda\") " Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.011590 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities" (OuterVolumeSpecName: "utilities") pod "5185b2c5-d115-4546-afcf-bc17a00a6cda" (UID: "5185b2c5-d115-4546-afcf-bc17a00a6cda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.028109 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52" (OuterVolumeSpecName: "kube-api-access-lqw52") pod "5185b2c5-d115-4546-afcf-bc17a00a6cda" (UID: "5185b2c5-d115-4546-afcf-bc17a00a6cda"). InnerVolumeSpecName "kube-api-access-lqw52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.112662 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqw52\" (UniqueName: \"kubernetes.io/projected/5185b2c5-d115-4546-afcf-bc17a00a6cda-kube-api-access-lqw52\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.112711 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.165741 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5185b2c5-d115-4546-afcf-bc17a00a6cda" (UID: "5185b2c5-d115-4546-afcf-bc17a00a6cda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.214548 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5185b2c5-d115-4546-afcf-bc17a00a6cda-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.289286 5010 generic.go:334] "Generic (PLEG): container finished" podID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerID="3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee" exitCode=0 Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.289420 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sg4lc" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.289436 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerDied","Data":"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee"} Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.291886 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sg4lc" event={"ID":"5185b2c5-d115-4546-afcf-bc17a00a6cda","Type":"ContainerDied","Data":"e1eaf28060cc636ff36317d3b149bb856ce747051158d19fd1ca2f7260aa8e45"} Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.291957 5010 scope.go:117] "RemoveContainer" containerID="3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.297761 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.332650 5010 scope.go:117] "RemoveContainer" containerID="856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.364834 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"] Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.374796 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sg4lc"] Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.379736 5010 scope.go:117] "RemoveContainer" containerID="a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.433930 5010 scope.go:117] "RemoveContainer" 
containerID="3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee" Feb 03 10:28:48 crc kubenswrapper[5010]: E0203 10:28:48.434764 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee\": container with ID starting with 3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee not found: ID does not exist" containerID="3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.434834 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee"} err="failed to get container status \"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee\": rpc error: code = NotFound desc = could not find container \"3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee\": container with ID starting with 3d14f7954905dfe08dbb7e401dfd3febefca605e762c64912a99c848d50c32ee not found: ID does not exist" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.434875 5010 scope.go:117] "RemoveContainer" containerID="856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb" Feb 03 10:28:48 crc kubenswrapper[5010]: E0203 10:28:48.435635 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb\": container with ID starting with 856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb not found: ID does not exist" containerID="856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.435693 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb"} err="failed to get container status \"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb\": rpc error: code = NotFound desc = could not find container \"856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb\": container with ID starting with 856eada0db222e1896dcc1f7b3ea89a80e570a61b2928200b50eca62149213eb not found: ID does not exist" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.435735 5010 scope.go:117] "RemoveContainer" containerID="a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4" Feb 03 10:28:48 crc kubenswrapper[5010]: E0203 10:28:48.436415 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4\": container with ID starting with a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4 not found: ID does not exist" containerID="a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.436489 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4"} err="failed to get container status \"a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4\": rpc error: code = NotFound desc = could not find container \"a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4\": container with ID starting with 
a872397b7968be8c4ffd262a8deea4f4c66a360b3a087a92e88a40e32c031cf4 not found: ID does not exist" Feb 03 10:28:48 crc kubenswrapper[5010]: I0203 10:28:48.516887 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" path="/var/lib/kubelet/pods/5185b2c5-d115-4546-afcf-bc17a00a6cda/volumes" Feb 03 10:28:52 crc kubenswrapper[5010]: I0203 10:28:52.313148 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 10:28:56 crc kubenswrapper[5010]: I0203 10:28:56.502088 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:28:56 crc kubenswrapper[5010]: E0203 10:28:56.502875 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:29:02 crc kubenswrapper[5010]: I0203 10:29:02.626430 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:29:03 crc kubenswrapper[5010]: I0203 10:29:03.629645 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:29:07 crc kubenswrapper[5010]: I0203 10:29:07.061679 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq" containerID="cri-o://602c03e894fa88a9b33161b23751551ae10019029e054f5933d29cf4949f0620" gracePeriod=604796 Feb 03 10:29:07 crc kubenswrapper[5010]: I0203 10:29:07.907264 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq" containerID="cri-o://e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89" gracePeriod=604796 Feb 03 10:29:08 crc kubenswrapper[5010]: I0203 10:29:08.036972 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Feb 03 10:29:08 crc kubenswrapper[5010]: I0203 10:29:08.503071 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:29:08 crc kubenswrapper[5010]: E0203 10:29:08.503530 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:29:08 crc kubenswrapper[5010]: I0203 10:29:08.617730 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 
10:29:13.576954 5010 generic.go:334] "Generic (PLEG): container finished" podID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerID="602c03e894fa88a9b33161b23751551ae10019029e054f5933d29cf4949f0620" exitCode=0 Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.577161 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerDied","Data":"602c03e894fa88a9b33161b23751551ae10019029e054f5933d29cf4949f0620"} Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.832874 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851688 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851749 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851781 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851811 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851852 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851934 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.851986 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5rwd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.852011 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.852034 5010 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.852123 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.852162 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret\") pod \"2ce83ed2-cbef-4045-8822-6f58268b28b3\" (UID: \"2ce83ed2-cbef-4045-8822-6f58268b28b3\") " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.853484 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.854325 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.854361 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.858701 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.858759 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.859414 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd" (OuterVolumeSpecName: "kube-api-access-m5rwd") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "kube-api-access-m5rwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.865892 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info" (OuterVolumeSpecName: "pod-info") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.882136 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.909711 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data" (OuterVolumeSpecName: "config-data") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.952255 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf" (OuterVolumeSpecName: "server-conf") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.955692 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.955811 5010 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.955914 5010 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ce83ed2-cbef-4045-8822-6f58268b28b3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956020 5010 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ce83ed2-cbef-4045-8822-6f58268b28b3-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956171 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956247 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956322 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956406 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956482 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5rwd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-kube-api-access-m5rwd\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.956556 5010 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ce83ed2-cbef-4045-8822-6f58268b28b3-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:13 crc kubenswrapper[5010]: I0203 10:29:13.994632 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.010926 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2ce83ed2-cbef-4045-8822-6f58268b28b3" (UID: "2ce83ed2-cbef-4045-8822-6f58268b28b3"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.061715 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ce83ed2-cbef-4045-8822-6f58268b28b3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.061754 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.529944 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.581012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.581107 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkwkl\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.581168 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.581202 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582405 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582440 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582473 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582507 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: 
\"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582590 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582666 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.582692 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\" (UID: \"f2066c8b-8b89-4dcb-972d-aea4dcd1c105\") " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.584999 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.587073 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.592131 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.593332 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.598243 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.598581 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl" (OuterVolumeSpecName: "kube-api-access-qkwkl") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "kube-api-access-qkwkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.598881 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info" (OuterVolumeSpecName: "pod-info") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.600407 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.617750 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ce83ed2-cbef-4045-8822-6f58268b28b3","Type":"ContainerDied","Data":"97cdcebe285a4f7a484868c96029b1b0d97151d7f63016f73836ed870ad4197d"} Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.617821 5010 scope.go:117] "RemoveContainer" containerID="602c03e894fa88a9b33161b23751551ae10019029e054f5933d29cf4949f0620" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.617994 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.634502 5010 generic.go:334] "Generic (PLEG): container finished" podID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerID="e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89" exitCode=0 Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.634555 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerDied","Data":"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89"} Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.634586 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f2066c8b-8b89-4dcb-972d-aea4dcd1c105","Type":"ContainerDied","Data":"6f662c0876b2bb6a1a91c65ab1f7cf8a34f9b5b27a5996afb9426d7a8621423b"} Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.634661 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.640023 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data" (OuterVolumeSpecName: "config-data") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685180 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685259 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685274 5010 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685285 5010 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685307 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685317 5010 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685327 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkwkl\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-kube-api-access-qkwkl\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685335 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.685344 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.707975 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf" (OuterVolumeSpecName: "server-conf") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.717691 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.736407 5010 scope.go:117] "RemoveContainer" containerID="10e7a7e1923769d25869f1642046743d27038f14081a9edd79e0d2a9d1c7d095" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.747795 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.765012 5010 scope.go:117] "RemoveContainer" containerID="e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.765033 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f2066c8b-8b89-4dcb-972d-aea4dcd1c105" (UID: "f2066c8b-8b89-4dcb-972d-aea4dcd1c105"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.777037 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.787230 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.787267 5010 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.787279 5010 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f2066c8b-8b89-4dcb-972d-aea4dcd1c105-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.803754 5010 scope.go:117] "RemoveContainer" containerID="35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812419 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812863 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812879 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq" Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812890 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="extract-utilities" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812897 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="extract-utilities" Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812915 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="registry-server" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812921 5010 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="registry-server" Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812941 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="setup-container" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812947 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="setup-container" Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812955 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="setup-container" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812961 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="setup-container" Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.812981 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.812986 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq" Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.813006 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="extract-content" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.813012 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="extract-content" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.813194 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" containerName="rabbitmq" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.813234 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="5185b2c5-d115-4546-afcf-bc17a00a6cda" containerName="registry-server" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.813246 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" containerName="rabbitmq" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.814836 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.817964 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.827538 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.827739 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.827837 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.827918 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.828047 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9nfm9" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.828532 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.841328 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.858804 5010 scope.go:117] "RemoveContainer" containerID="e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89" Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.859264 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89\": container with ID starting with e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89 not found: ID does not exist" containerID="e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.859309 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89"} err="failed to get container status \"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89\": rpc error: code = NotFound desc = could not find container \"e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89\": container with ID starting with e7b324754363c2f3c9935cf7390dc333d18407cc19a03ceb47012bc05ac0af89 not found: ID does not exist" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.859334 5010 scope.go:117] "RemoveContainer" containerID="35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae" Feb 03 10:29:14 crc kubenswrapper[5010]: E0203 10:29:14.859599 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae\": container with ID starting with 35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae not found: ID does not exist" containerID="35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.859628 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae"} err="failed to get container status 
\"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae\": rpc error: code = NotFound desc = could not find container \"35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae\": container with ID starting with 35eaa2b360c11ef3168d683fc2f67400b01f08b1d9f58aea46291a308a02faae not found: ID does not exist" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.888944 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-server-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889003 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/543f315d-d2f8-497f-a2c1-1a929c1611be-pod-info\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889030 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889062 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfn2t\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-kube-api-access-nfn2t\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889334 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889457 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889587 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889699 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889729 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889778 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-config-data\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.889919 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/543f315d-d2f8-497f-a2c1-1a929c1611be-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.974738 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.997793 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998165 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/543f315d-d2f8-497f-a2c1-1a929c1611be-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998289 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-server-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998332 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/543f315d-d2f8-497f-a2c1-1a929c1611be-pod-info\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998383 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998474 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfn2t\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-kube-api-access-nfn2t\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998602 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc 
kubenswrapper[5010]: I0203 10:29:14.998752 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.998848 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.999003 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.999140 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.999171 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.999222 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-config-data\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:14 crc kubenswrapper[5010]: I0203 10:29:14.999740 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.000069 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-config-data\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.000197 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-server-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.000479 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.001037 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/543f315d-d2f8-497f-a2c1-1a929c1611be-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.007359 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.007359 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/543f315d-d2f8-497f-a2c1-1a929c1611be-pod-info\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.013145 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/543f315d-d2f8-497f-a2c1-1a929c1611be-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.013389 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.019406 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.022691 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.025960 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.026121 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.026263 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ld7g9" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.027589 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.027860 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.028022 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.028204 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.033911 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.056285 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.056952 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfn2t\" (UniqueName: \"kubernetes.io/projected/543f315d-d2f8-497f-a2c1-1a929c1611be-kube-api-access-nfn2t\") pod \"rabbitmq-server-0\" (UID: \"543f315d-d2f8-497f-a2c1-1a929c1611be\") " pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.101740 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.101945 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.102245 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.102344 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.102547 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.102654 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.102921 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.103017 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.103099 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.103170 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.103244 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf265\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-kube-api-access-pf265\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.134857 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.239781 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.239955 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240015 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240141 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240287 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240503 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240576 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240643 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240698 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240772 5010 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pf265\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-kube-api-access-pf265\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.240878 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.245739 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.245957 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.246356 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.246631 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.246831 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.247854 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.263950 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.266057 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.267035 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.274957 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.282726 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf265\" (UniqueName: \"kubernetes.io/projected/9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf-kube-api-access-pf265\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.324457 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.347172 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.768807 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 10:29:15 crc kubenswrapper[5010]: I0203 10:29:15.920020 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 10:29:15 crc kubenswrapper[5010]: W0203 10:29:15.925662 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9044f36b_9c2b_47bf_b1a3_46c14c6ec5cf.slice/crio-6181ea4de4a405350e47624cb8c31335ee5fc8611261f4795045fa244338c476 WatchSource:0}: Error finding container 6181ea4de4a405350e47624cb8c31335ee5fc8611261f4795045fa244338c476: Status 404 returned error can't find the container with id 6181ea4de4a405350e47624cb8c31335ee5fc8611261f4795045fa244338c476 Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.043923 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.045513 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.050918 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.069752 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.169480 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.169588 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.169663 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.169770 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.169964 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8q2d\" (UniqueName: \"kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.170010 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.170043 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.272811 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8q2d\" (UniqueName: \"kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d\") pod 
\"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.272897 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.272943 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.272989 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.273046 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.273097 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.273156 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.274645 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.274664 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.274790 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.275179 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.275208 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.275432 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.293730 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8q2d\" (UniqueName: \"kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d\") pod \"dnsmasq-dns-79bd4cc8c9-mjf7k\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.373266 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.558314 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce83ed2-cbef-4045-8822-6f58268b28b3" path="/var/lib/kubelet/pods/2ce83ed2-cbef-4045-8822-6f58268b28b3/volumes" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.559613 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2066c8b-8b89-4dcb-972d-aea4dcd1c105" path="/var/lib/kubelet/pods/f2066c8b-8b89-4dcb-972d-aea4dcd1c105/volumes" Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.667626 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"543f315d-d2f8-497f-a2c1-1a929c1611be","Type":"ContainerStarted","Data":"fcef8f75e389407c1f346ac05d9ab158ea83bf4db6071355624db725d02f0e9c"} Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.669297 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf","Type":"ContainerStarted","Data":"6181ea4de4a405350e47624cb8c31335ee5fc8611261f4795045fa244338c476"} Feb 03 10:29:16 crc kubenswrapper[5010]: I0203 10:29:16.932184 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:17 crc kubenswrapper[5010]: I0203 10:29:17.685499 5010 generic.go:334] "Generic (PLEG): container finished" podID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerID="56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d" exitCode=0 Feb 03 10:29:17 crc kubenswrapper[5010]: I0203 10:29:17.685890 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" 
event={"ID":"d1f7d409-fa49-4bd1-a07b-0c349e72b21c","Type":"ContainerDied","Data":"56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d"} Feb 03 10:29:17 crc kubenswrapper[5010]: I0203 10:29:17.686777 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" event={"ID":"d1f7d409-fa49-4bd1-a07b-0c349e72b21c","Type":"ContainerStarted","Data":"b90baa1d4d9f0ddbc89dd4b10b55aff56a1978d65ba73c5d42c76702253705b7"} Feb 03 10:29:18 crc kubenswrapper[5010]: I0203 10:29:18.696882 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"543f315d-d2f8-497f-a2c1-1a929c1611be","Type":"ContainerStarted","Data":"19fb7b1a68b1ff52895088d592e7289b1fff4b1eeeb28c2089dc4b6320456f19"} Feb 03 10:29:18 crc kubenswrapper[5010]: I0203 10:29:18.702083 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf","Type":"ContainerStarted","Data":"09c52085ec4e3b7039b34527eb3963f0af7d7da40200e027a5bee0de0a333736"} Feb 03 10:29:18 crc kubenswrapper[5010]: I0203 10:29:18.705551 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" event={"ID":"d1f7d409-fa49-4bd1-a07b-0c349e72b21c","Type":"ContainerStarted","Data":"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2"} Feb 03 10:29:18 crc kubenswrapper[5010]: I0203 10:29:18.705703 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:18 crc kubenswrapper[5010]: I0203 10:29:18.746543 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" podStartSLOduration=2.746527847 podStartE2EDuration="2.746527847s" podCreationTimestamp="2026-02-03 10:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:29:18.744311021 +0000 UTC m=+1628.900287170" watchObservedRunningTime="2026-02-03 10:29:18.746527847 +0000 UTC m=+1628.902503976" Feb 03 10:29:21 crc kubenswrapper[5010]: I0203 10:29:21.502919 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:29:21 crc kubenswrapper[5010]: E0203 10:29:21.503824 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.375353 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.465381 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"] Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.466426 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="dnsmasq-dns" containerID="cri-o://e50968d30732ac2c762348838c8f14a711f5720b5d244d0a09fd6ce7ae975514" gracePeriod=10 Feb 03 10:29:26 crc 
kubenswrapper[5010]: I0203 10:29:26.683086 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-845df"] Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.687369 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.698364 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-845df"] Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.729828 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4bn\" (UniqueName: \"kubernetes.io/projected/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-kube-api-access-gn4bn\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730247 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-config\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730358 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730485 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730602 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730814 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-svc\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.730994 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.790345 5010 generic.go:334] "Generic (PLEG): container finished" podID="112eb3e9-cf11-4513-be2d-53a42670413e" 
containerID="e50968d30732ac2c762348838c8f14a711f5720b5d244d0a09fd6ce7ae975514" exitCode=0 Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.790389 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" event={"ID":"112eb3e9-cf11-4513-be2d-53a42670413e","Type":"ContainerDied","Data":"e50968d30732ac2c762348838c8f14a711f5720b5d244d0a09fd6ce7ae975514"} Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832460 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-config\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832523 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832574 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832618 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832734 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-svc\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832807 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.832841 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn4bn\" (UniqueName: \"kubernetes.io/projected/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-kube-api-access-gn4bn\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.833731 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-config\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 
10:29:26.834635 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.835115 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.835123 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-svc\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.835376 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.837378 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:26 crc kubenswrapper[5010]: I0203 10:29:26.853890 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn4bn\" (UniqueName: \"kubernetes.io/projected/3d935acc-a244-4c1f-a9f8-9924fa8b61f1-kube-api-access-gn4bn\") pod \"dnsmasq-dns-55478c4467-845df\" (UID: \"3d935acc-a244-4c1f-a9f8-9924fa8b61f1\") " pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.014310 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.165749 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.260761 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.260805 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.260894 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.260944 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.261039 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.261069 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm9pt\" (UniqueName: \"kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt\") pod \"112eb3e9-cf11-4513-be2d-53a42670413e\" (UID: \"112eb3e9-cf11-4513-be2d-53a42670413e\") " Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.271423 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt" (OuterVolumeSpecName: "kube-api-access-pm9pt") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "kube-api-access-pm9pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.325210 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.328136 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.329975 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.334574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config" (OuterVolumeSpecName: "config") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.342917 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "112eb3e9-cf11-4513-be2d-53a42670413e" (UID: "112eb3e9-cf11-4513-be2d-53a42670413e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363485 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363543 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm9pt\" (UniqueName: \"kubernetes.io/projected/112eb3e9-cf11-4513-be2d-53a42670413e-kube-api-access-pm9pt\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363569 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363584 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363597 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.363607 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/112eb3e9-cf11-4513-be2d-53a42670413e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.572267 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-845df"] Feb 03 10:29:27 crc kubenswrapper[5010]: W0203 10:29:27.576134 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d935acc_a244_4c1f_a9f8_9924fa8b61f1.slice/crio-c0d0ee1a3dd0f8d1ec602e0dd75b1cdb018a087f84d0cc15e397b26c541c7dd3 WatchSource:0}: Error finding container c0d0ee1a3dd0f8d1ec602e0dd75b1cdb018a087f84d0cc15e397b26c541c7dd3: Status 404 returned 
error can't find the container with id c0d0ee1a3dd0f8d1ec602e0dd75b1cdb018a087f84d0cc15e397b26c541c7dd3 Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.802394 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-845df" event={"ID":"3d935acc-a244-4c1f-a9f8-9924fa8b61f1","Type":"ContainerStarted","Data":"c0d0ee1a3dd0f8d1ec602e0dd75b1cdb018a087f84d0cc15e397b26c541c7dd3"} Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.805035 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" event={"ID":"112eb3e9-cf11-4513-be2d-53a42670413e","Type":"ContainerDied","Data":"9696bbc5c05e1ee911f02b7758d1162dc7d17512676a3ce246b9266d4a35accd"} Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.805075 5010 scope.go:117] "RemoveContainer" containerID="e50968d30732ac2c762348838c8f14a711f5720b5d244d0a09fd6ce7ae975514" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.805202 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-5t6hf" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.835375 5010 scope.go:117] "RemoveContainer" containerID="84b72c9b54d05dcdbccb71e2a8f9d59046f32de5c34fe094370a4de1492b0639" Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.848945 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"] Feb 03 10:29:27 crc kubenswrapper[5010]: I0203 10:29:27.860078 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-5t6hf"] Feb 03 10:29:28 crc kubenswrapper[5010]: I0203 10:29:28.517018 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" path="/var/lib/kubelet/pods/112eb3e9-cf11-4513-be2d-53a42670413e/volumes" Feb 03 10:29:28 crc kubenswrapper[5010]: I0203 10:29:28.815506 5010 generic.go:334] "Generic (PLEG): container finished" podID="3d935acc-a244-4c1f-a9f8-9924fa8b61f1" containerID="52b75dc93253253ed5c3a050029beed8bfde18a85d4c17d4fcd8b1f6f28c4e39" exitCode=0 Feb 03 10:29:28 crc kubenswrapper[5010]: I0203 10:29:28.815586 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-845df" event={"ID":"3d935acc-a244-4c1f-a9f8-9924fa8b61f1","Type":"ContainerDied","Data":"52b75dc93253253ed5c3a050029beed8bfde18a85d4c17d4fcd8b1f6f28c4e39"} Feb 03 10:29:29 crc kubenswrapper[5010]: I0203 10:29:29.830534 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-845df" event={"ID":"3d935acc-a244-4c1f-a9f8-9924fa8b61f1","Type":"ContainerStarted","Data":"1aedaeb7d50a68d6d9432c3805aea359909c960c180d48e1a2adcc84f7707c3f"} Feb 03 10:29:29 crc kubenswrapper[5010]: I0203 10:29:29.830986 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:29 crc kubenswrapper[5010]: I0203 10:29:29.855888 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-845df" podStartSLOduration=3.855839686 podStartE2EDuration="3.855839686s" podCreationTimestamp="2026-02-03 10:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:29:29.853414484 +0000 UTC m=+1640.009390623" watchObservedRunningTime="2026-02-03 10:29:29.855839686 +0000 UTC m=+1640.011815825" Feb 03 10:29:36 crc kubenswrapper[5010]: I0203 10:29:36.502792 5010 
scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:29:36 crc kubenswrapper[5010]: E0203 10:29:36.505022 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.016404 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-845df" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.084646 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.085003 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="dnsmasq-dns" containerID="cri-o://e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2" gracePeriod=10 Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.572842 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688192 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688268 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688420 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688438 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688495 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8q2d\" (UniqueName: \"kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688518 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb\") pod 
\"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.688537 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0\") pod \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\" (UID: \"d1f7d409-fa49-4bd1-a07b-0c349e72b21c\") " Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.708780 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d" (OuterVolumeSpecName: "kube-api-access-z8q2d") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "kube-api-access-z8q2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.742511 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.746744 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.755249 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.757171 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.764896 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config" (OuterVolumeSpecName: "config") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.765611 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d1f7d409-fa49-4bd1-a07b-0c349e72b21c" (UID: "d1f7d409-fa49-4bd1-a07b-0c349e72b21c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790770 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790806 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790816 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8q2d\" (UniqueName: \"kubernetes.io/projected/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-kube-api-access-z8q2d\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790830 5010 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790839 5010 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790848 5010 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.790859 5010 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f7d409-fa49-4bd1-a07b-0c349e72b21c-config\") on node \"crc\" DevicePath \"\"" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.900220 5010 generic.go:334] "Generic (PLEG): container finished" podID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerID="e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2" exitCode=0 Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.900288 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" event={"ID":"d1f7d409-fa49-4bd1-a07b-0c349e72b21c","Type":"ContainerDied","Data":"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2"} Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.900320 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" event={"ID":"d1f7d409-fa49-4bd1-a07b-0c349e72b21c","Type":"ContainerDied","Data":"b90baa1d4d9f0ddbc89dd4b10b55aff56a1978d65ba73c5d42c76702253705b7"} Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.900348 5010 scope.go:117] "RemoveContainer" containerID="e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.900495 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mjf7k" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.940145 5010 scope.go:117] "RemoveContainer" containerID="56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.977441 5010 scope.go:117] "RemoveContainer" containerID="e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2" Feb 03 10:29:37 crc kubenswrapper[5010]: E0203 10:29:37.978645 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2\": container with ID starting with e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2 not found: ID does not exist" containerID="e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.978699 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2"} err="failed to get container status \"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2\": rpc error: code = NotFound desc = could not find container \"e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2\": container with ID starting with e4a87bedd6179cc30e40e0b4f219c25997a59185cf20c72f65fcf5b5a4e049f2 not found: ID does not exist" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.978730 5010 scope.go:117] "RemoveContainer" containerID="56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d" Feb 03 10:29:37 crc kubenswrapper[5010]: E0203 10:29:37.979771 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d\": container with ID starting with 56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d not found: ID does not exist" containerID="56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.979849 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d"} err="failed to get container status \"56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d\": rpc error: code = NotFound desc = could not find container \"56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d\": container with ID starting with 56d169c276fa4095404764411251a3851d82d66b94873e66867ac3bc5321f85d not found: ID does not exist" Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.985737 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:37 crc kubenswrapper[5010]: I0203 10:29:37.995874 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mjf7k"] Feb 03 10:29:38 crc kubenswrapper[5010]: I0203 10:29:38.513456 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" path="/var/lib/kubelet/pods/d1f7d409-fa49-4bd1-a07b-0c349e72b21c/volumes" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.805467 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749"] Feb 03 10:29:45 crc kubenswrapper[5010]: E0203 10:29:45.808383 5010 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="init" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808413 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="init" Feb 03 10:29:45 crc kubenswrapper[5010]: E0203 10:29:45.808430 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808438 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: E0203 10:29:45.808454 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808462 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: E0203 10:29:45.808496 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="init" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808503 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="init" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808735 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="112eb3e9-cf11-4513-be2d-53a42670413e" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.808767 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f7d409-fa49-4bd1-a07b-0c349e72b21c" containerName="dnsmasq-dns" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.809868 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.813070 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.814664 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.814980 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.816433 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.821307 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749"] Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.855710 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.855777 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsh87\" (UniqueName: \"kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.855884 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.855912 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.956988 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.957033 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsh87\" (UniqueName: 
\"kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.957097 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.957123 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.963165 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.963296 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.964333 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:45 crc kubenswrapper[5010]: I0203 10:29:45.973505 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsh87\" (UniqueName: \"kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mg749\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:46 crc kubenswrapper[5010]: I0203 10:29:46.134192 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:29:46 crc kubenswrapper[5010]: I0203 10:29:46.683960 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749"] Feb 03 10:29:46 crc kubenswrapper[5010]: I0203 10:29:46.991631 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" event={"ID":"43ecdc43-d866-4902-89cb-0ce68e89fe05","Type":"ContainerStarted","Data":"77fbac41963512257d1526ae37ef85f2001ddf70c4b35586b4cb448e373c633b"} Feb 03 10:29:47 crc kubenswrapper[5010]: I0203 10:29:47.502446 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:29:47 crc kubenswrapper[5010]: E0203 10:29:47.503131 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:29:50 crc kubenswrapper[5010]: I0203 10:29:50.022423 5010 generic.go:334] "Generic (PLEG): container finished" podID="543f315d-d2f8-497f-a2c1-1a929c1611be" containerID="19fb7b1a68b1ff52895088d592e7289b1fff4b1eeeb28c2089dc4b6320456f19" exitCode=0 Feb 03 10:29:50 crc kubenswrapper[5010]: I0203 10:29:50.022940 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"543f315d-d2f8-497f-a2c1-1a929c1611be","Type":"ContainerDied","Data":"19fb7b1a68b1ff52895088d592e7289b1fff4b1eeeb28c2089dc4b6320456f19"} Feb 03 10:29:51 crc kubenswrapper[5010]: I0203 10:29:51.033551 5010 generic.go:334] "Generic (PLEG): container finished" podID="9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf" containerID="09c52085ec4e3b7039b34527eb3963f0af7d7da40200e027a5bee0de0a333736" exitCode=0 Feb 03 10:29:51 crc kubenswrapper[5010]: I0203 10:29:51.033644 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf","Type":"ContainerDied","Data":"09c52085ec4e3b7039b34527eb3963f0af7d7da40200e027a5bee0de0a333736"} Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.144072 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb"] Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.146082 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.148967 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.149288 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.166470 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb"] Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.361873 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.361955 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.362044 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8czt\" (UniqueName: \"kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: E0203 10:30:00.396079 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Feb 03 10:30:00 crc kubenswrapper[5010]: E0203 10:30:00.396314 5010 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 03 10:30:00 crc kubenswrapper[5010]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Feb 03 10:30:00 crc kubenswrapper[5010]: - hosts: all Feb 03 10:30:00 crc kubenswrapper[5010]: strategy: linear Feb 03 10:30:00 crc kubenswrapper[5010]: tasks: Feb 03 10:30:00 crc kubenswrapper[5010]: - name: Enable podified-repos Feb 03 10:30:00 crc kubenswrapper[5010]: become: true Feb 03 10:30:00 crc kubenswrapper[5010]: ansible.builtin.shell: | Feb 03 10:30:00 crc kubenswrapper[5010]: set -euxo pipefail Feb 03 10:30:00 crc kubenswrapper[5010]: pushd /var/tmp Feb 03 10:30:00 crc kubenswrapper[5010]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Feb 03 10:30:00 crc kubenswrapper[5010]: pushd repo-setup-main Feb 03 10:30:00 crc 
kubenswrapper[5010]: python3 -m venv ./venv Feb 03 10:30:00 crc kubenswrapper[5010]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Feb 03 10:30:00 crc kubenswrapper[5010]: ./venv/bin/repo-setup current-podified -b antelope Feb 03 10:30:00 crc kubenswrapper[5010]: popd Feb 03 10:30:00 crc kubenswrapper[5010]: rm -rf repo-setup-main Feb 03 10:30:00 crc kubenswrapper[5010]: Feb 03 10:30:00 crc kubenswrapper[5010]: Feb 03 10:30:00 crc kubenswrapper[5010]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Feb 03 10:30:00 crc kubenswrapper[5010]: edpm_override_hosts: openstack-edpm-ipam Feb 03 10:30:00 crc kubenswrapper[5010]: edpm_service_type: repo-setup Feb 03 10:30:00 crc kubenswrapper[5010]: Feb 03 10:30:00 crc kubenswrapper[5010]: Feb 03 10:30:00 crc kubenswrapper[5010]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsh87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-mg749_openstack(43ecdc43-d866-4902-89cb-0ce68e89fe05): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Feb 03 10:30:00 crc kubenswrapper[5010]: > logger="UnhandledError" Feb 03 10:30:00 crc kubenswrapper[5010]: E0203 10:30:00.397419 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" podUID="43ecdc43-d866-4902-89cb-0ce68e89fe05" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.466927 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8czt\" (UniqueName: \"kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.467655 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.467706 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.468758 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.472814 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.482485 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8czt\" (UniqueName: \"kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt\") pod \"collect-profiles-29501910-7ksgb\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.508636 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:30:00 crc kubenswrapper[5010]: E0203 10:30:00.509086 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.515426 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:00 crc kubenswrapper[5010]: I0203 10:30:00.987899 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb"] Feb 03 10:30:00 crc kubenswrapper[5010]: W0203 10:30:00.990511 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e554f0_be79_4c9c_974d_f25941ae930e.slice/crio-a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1 WatchSource:0}: Error finding container a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1: Status 404 returned error can't find the container with id a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1 Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.141288 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" event={"ID":"34e554f0-be79-4c9c-974d-f25941ae930e","Type":"ContainerStarted","Data":"a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1"} Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.144279 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"543f315d-d2f8-497f-a2c1-1a929c1611be","Type":"ContainerStarted","Data":"dd4807d6c0736ad636d34b769cd1839372915e22b697abfb3ff750b12a7a18fc"} Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.145613 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.148122 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf","Type":"ContainerStarted","Data":"bf8498e9e77d45722feb55d8cf9c2655523b1106b4098f04a3b76453dfa0da9a"} Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.148744 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:30:01 crc kubenswrapper[5010]: E0203 10:30:01.148881 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" podUID="43ecdc43-d866-4902-89cb-0ce68e89fe05" Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.175531 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=47.175507889 podStartE2EDuration="47.175507889s" podCreationTimestamp="2026-02-03 10:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 10:30:01.170570833 +0000 UTC m=+1671.326546952" watchObservedRunningTime="2026-02-03 10:30:01.175507889 +0000 UTC m=+1671.331484018" Feb 03 10:30:01 crc kubenswrapper[5010]: I0203 10:30:01.226229 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=47.226195612 podStartE2EDuration="47.226195612s" podCreationTimestamp="2026-02-03 10:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-03 10:30:01.222059037 +0000 UTC m=+1671.378035166" watchObservedRunningTime="2026-02-03 10:30:01.226195612 +0000 UTC m=+1671.382171741" Feb 03 10:30:02 crc kubenswrapper[5010]: I0203 10:30:02.158937 5010 generic.go:334] "Generic (PLEG): container finished" podID="34e554f0-be79-4c9c-974d-f25941ae930e" containerID="50c1d73139063edd3d9e95aeb676f19fdb661e56cb93f7dad0c5a0ed756233ca" exitCode=0 Feb 03 10:30:02 crc kubenswrapper[5010]: I0203 10:30:02.159047 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" event={"ID":"34e554f0-be79-4c9c-974d-f25941ae930e","Type":"ContainerDied","Data":"50c1d73139063edd3d9e95aeb676f19fdb661e56cb93f7dad0c5a0ed756233ca"} Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.482975 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.634743 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8czt\" (UniqueName: \"kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt\") pod \"34e554f0-be79-4c9c-974d-f25941ae930e\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.636058 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume\") pod \"34e554f0-be79-4c9c-974d-f25941ae930e\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.636334 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume\") pod \"34e554f0-be79-4c9c-974d-f25941ae930e\" (UID: \"34e554f0-be79-4c9c-974d-f25941ae930e\") " Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.637201 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume" (OuterVolumeSpecName: "config-volume") pod "34e554f0-be79-4c9c-974d-f25941ae930e" (UID: "34e554f0-be79-4c9c-974d-f25941ae930e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.642394 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34e554f0-be79-4c9c-974d-f25941ae930e" (UID: "34e554f0-be79-4c9c-974d-f25941ae930e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.645843 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt" (OuterVolumeSpecName: "kube-api-access-c8czt") pod "34e554f0-be79-4c9c-974d-f25941ae930e" (UID: "34e554f0-be79-4c9c-974d-f25941ae930e"). InnerVolumeSpecName "kube-api-access-c8czt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.738641 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34e554f0-be79-4c9c-974d-f25941ae930e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.738687 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8czt\" (UniqueName: \"kubernetes.io/projected/34e554f0-be79-4c9c-974d-f25941ae930e-kube-api-access-c8czt\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:03 crc kubenswrapper[5010]: I0203 10:30:03.738697 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34e554f0-be79-4c9c-974d-f25941ae930e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:04 crc kubenswrapper[5010]: I0203 10:30:04.185311 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" event={"ID":"34e554f0-be79-4c9c-974d-f25941ae930e","Type":"ContainerDied","Data":"a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1"} Feb 03 10:30:04 crc kubenswrapper[5010]: I0203 10:30:04.185376 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12583ebc18635cfe4abc59f20a5088499fc468fa5cbdc945925543afdc66fa1" Feb 03 10:30:04 crc kubenswrapper[5010]: I0203 10:30:04.185437 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb" Feb 03 10:30:12 crc kubenswrapper[5010]: I0203 10:30:12.221556 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:30:13 crc kubenswrapper[5010]: I0203 10:30:13.272937 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" event={"ID":"43ecdc43-d866-4902-89cb-0ce68e89fe05","Type":"ContainerStarted","Data":"532c0063bf8daca6dcc284fc64ff56a88aee7dc3a78ab9eb4836585e9d528bda"} Feb 03 10:30:13 crc kubenswrapper[5010]: I0203 10:30:13.297135 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" podStartSLOduration=2.761535313 podStartE2EDuration="28.297112397s" podCreationTimestamp="2026-02-03 10:29:45 +0000 UTC" firstStartedPulling="2026-02-03 10:29:46.683804034 +0000 UTC m=+1656.839780163" lastFinishedPulling="2026-02-03 10:30:12.219381118 +0000 UTC m=+1682.375357247" observedRunningTime="2026-02-03 10:30:13.288535618 +0000 UTC m=+1683.444511767" watchObservedRunningTime="2026-02-03 10:30:13.297112397 +0000 UTC m=+1683.453088526" Feb 03 10:30:13 crc kubenswrapper[5010]: I0203 10:30:13.504122 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:30:13 crc kubenswrapper[5010]: E0203 10:30:13.505387 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:30:15 crc kubenswrapper[5010]: I0203 10:30:15.138448 5010 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 03 10:30:15 crc kubenswrapper[5010]: I0203 10:30:15.351467 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 03 10:30:19 crc kubenswrapper[5010]: I0203 10:30:19.652732 5010 scope.go:117] "RemoveContainer" containerID="387dd9fd0160568ebec8f1a6d5d1c5088020bf051ddedc665506a7243fc7b05d" Feb 03 10:30:19 crc kubenswrapper[5010]: I0203 10:30:19.686997 5010 scope.go:117] "RemoveContainer" containerID="ecc134dc06388d88bee9d6893b38c4e64f29d454add40ba84636bf94ef646d8a" Feb 03 10:30:25 crc kubenswrapper[5010]: I0203 10:30:25.381572 5010 generic.go:334] "Generic (PLEG): container finished" podID="43ecdc43-d866-4902-89cb-0ce68e89fe05" containerID="532c0063bf8daca6dcc284fc64ff56a88aee7dc3a78ab9eb4836585e9d528bda" exitCode=0 Feb 03 10:30:25 crc kubenswrapper[5010]: I0203 10:30:25.381661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" event={"ID":"43ecdc43-d866-4902-89cb-0ce68e89fe05","Type":"ContainerDied","Data":"532c0063bf8daca6dcc284fc64ff56a88aee7dc3a78ab9eb4836585e9d528bda"} Feb 03 10:30:25 crc kubenswrapper[5010]: I0203 10:30:25.502882 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:30:25 crc kubenswrapper[5010]: E0203 10:30:25.503184 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:30:26 crc kubenswrapper[5010]: I0203 10:30:26.869870 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:30:26 crc kubenswrapper[5010]: I0203 10:30:26.996334 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsh87\" (UniqueName: \"kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87\") pod \"43ecdc43-d866-4902-89cb-0ce68e89fe05\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " Feb 03 10:30:26 crc kubenswrapper[5010]: I0203 10:30:26.996550 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam\") pod \"43ecdc43-d866-4902-89cb-0ce68e89fe05\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " Feb 03 10:30:26 crc kubenswrapper[5010]: I0203 10:30:26.997415 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle\") pod \"43ecdc43-d866-4902-89cb-0ce68e89fe05\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " Feb 03 10:30:26 crc kubenswrapper[5010]: I0203 10:30:26.997481 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory\") pod \"43ecdc43-d866-4902-89cb-0ce68e89fe05\" (UID: \"43ecdc43-d866-4902-89cb-0ce68e89fe05\") " Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.003020 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "43ecdc43-d866-4902-89cb-0ce68e89fe05" (UID: "43ecdc43-d866-4902-89cb-0ce68e89fe05"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.009698 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87" (OuterVolumeSpecName: "kube-api-access-rsh87") pod "43ecdc43-d866-4902-89cb-0ce68e89fe05" (UID: "43ecdc43-d866-4902-89cb-0ce68e89fe05"). InnerVolumeSpecName "kube-api-access-rsh87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.025906 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "43ecdc43-d866-4902-89cb-0ce68e89fe05" (UID: "43ecdc43-d866-4902-89cb-0ce68e89fe05"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.033053 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory" (OuterVolumeSpecName: "inventory") pod "43ecdc43-d866-4902-89cb-0ce68e89fe05" (UID: "43ecdc43-d866-4902-89cb-0ce68e89fe05"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.100127 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.100165 5010 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.100175 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ecdc43-d866-4902-89cb-0ce68e89fe05-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.100187 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsh87\" (UniqueName: \"kubernetes.io/projected/43ecdc43-d866-4902-89cb-0ce68e89fe05-kube-api-access-rsh87\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.423821 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" event={"ID":"43ecdc43-d866-4902-89cb-0ce68e89fe05","Type":"ContainerDied","Data":"77fbac41963512257d1526ae37ef85f2001ddf70c4b35586b4cb448e373c633b"} Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.423864 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77fbac41963512257d1526ae37ef85f2001ddf70c4b35586b4cb448e373c633b" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.423918 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mg749" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.573539 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk"] Feb 03 10:30:27 crc kubenswrapper[5010]: E0203 10:30:27.573975 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ecdc43-d866-4902-89cb-0ce68e89fe05" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.573993 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ecdc43-d866-4902-89cb-0ce68e89fe05" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:27 crc kubenswrapper[5010]: E0203 10:30:27.574002 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e554f0-be79-4c9c-974d-f25941ae930e" containerName="collect-profiles" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.574009 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e554f0-be79-4c9c-974d-f25941ae930e" containerName="collect-profiles" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.574187 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ecdc43-d866-4902-89cb-0ce68e89fe05" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.574205 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e554f0-be79-4c9c-974d-f25941ae930e" containerName="collect-profiles" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.574811 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.579613 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.579659 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.579748 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.579830 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.592446 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk"] Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.720466 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.720830 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkq5x\" (UniqueName: \"kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.720879 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.822728 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.822800 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkq5x\" (UniqueName: \"kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.822858 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.830441 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.833149 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.840126 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkq5x\" (UniqueName: \"kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-r8zqk\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:27 crc kubenswrapper[5010]: I0203 10:30:27.893837 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:28 crc kubenswrapper[5010]: I0203 10:30:28.441590 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk"] Feb 03 10:30:28 crc kubenswrapper[5010]: I0203 10:30:28.455205 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:30:29 crc kubenswrapper[5010]: I0203 10:30:29.441002 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" event={"ID":"36d3f978-a301-44e6-a401-72e94c9f70ad","Type":"ContainerStarted","Data":"ae6a116bb479bd12b5c8f968f81170c52418ccece8e5dc2d957f317923c84955"} Feb 03 10:30:30 crc kubenswrapper[5010]: I0203 10:30:30.455105 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" event={"ID":"36d3f978-a301-44e6-a401-72e94c9f70ad","Type":"ContainerStarted","Data":"520e85302ebeae40d4d393da385fd7d92cc796319d6b0edc6e78b25df2accb20"} Feb 03 10:30:30 crc kubenswrapper[5010]: I0203 10:30:30.476376 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" podStartSLOduration=2.645414555 podStartE2EDuration="3.476357958s" podCreationTimestamp="2026-02-03 10:30:27 +0000 UTC" firstStartedPulling="2026-02-03 10:30:28.455012891 +0000 UTC m=+1698.610989020" lastFinishedPulling="2026-02-03 10:30:29.285956294 +0000 UTC m=+1699.441932423" observedRunningTime="2026-02-03 10:30:30.472037028 +0000 UTC m=+1700.628013177" watchObservedRunningTime="2026-02-03 10:30:30.476357958 +0000 UTC m=+1700.632334087" Feb 03 10:30:32 crc kubenswrapper[5010]: I0203 10:30:32.476630 5010 generic.go:334] "Generic (PLEG): container finished" podID="36d3f978-a301-44e6-a401-72e94c9f70ad" 
containerID="520e85302ebeae40d4d393da385fd7d92cc796319d6b0edc6e78b25df2accb20" exitCode=0 Feb 03 10:30:32 crc kubenswrapper[5010]: I0203 10:30:32.476704 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" event={"ID":"36d3f978-a301-44e6-a401-72e94c9f70ad","Type":"ContainerDied","Data":"520e85302ebeae40d4d393da385fd7d92cc796319d6b0edc6e78b25df2accb20"} Feb 03 10:30:33 crc kubenswrapper[5010]: I0203 10:30:33.936167 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.047996 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory\") pod \"36d3f978-a301-44e6-a401-72e94c9f70ad\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.048382 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkq5x\" (UniqueName: \"kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x\") pod \"36d3f978-a301-44e6-a401-72e94c9f70ad\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.048660 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam\") pod \"36d3f978-a301-44e6-a401-72e94c9f70ad\" (UID: \"36d3f978-a301-44e6-a401-72e94c9f70ad\") " Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.054062 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x" (OuterVolumeSpecName: "kube-api-access-gkq5x") pod "36d3f978-a301-44e6-a401-72e94c9f70ad" (UID: "36d3f978-a301-44e6-a401-72e94c9f70ad"). InnerVolumeSpecName "kube-api-access-gkq5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.075175 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "36d3f978-a301-44e6-a401-72e94c9f70ad" (UID: "36d3f978-a301-44e6-a401-72e94c9f70ad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.081912 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory" (OuterVolumeSpecName: "inventory") pod "36d3f978-a301-44e6-a401-72e94c9f70ad" (UID: "36d3f978-a301-44e6-a401-72e94c9f70ad"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.151185 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.151558 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d3f978-a301-44e6-a401-72e94c9f70ad-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.151578 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkq5x\" (UniqueName: \"kubernetes.io/projected/36d3f978-a301-44e6-a401-72e94c9f70ad-kube-api-access-gkq5x\") on node \"crc\" DevicePath \"\"" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.493641 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" event={"ID":"36d3f978-a301-44e6-a401-72e94c9f70ad","Type":"ContainerDied","Data":"ae6a116bb479bd12b5c8f968f81170c52418ccece8e5dc2d957f317923c84955"} Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.493687 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae6a116bb479bd12b5c8f968f81170c52418ccece8e5dc2d957f317923c84955" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.493763 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-r8zqk" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.568659 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf"] Feb 03 10:30:34 crc kubenswrapper[5010]: E0203 10:30:34.569166 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d3f978-a301-44e6-a401-72e94c9f70ad" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.569190 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d3f978-a301-44e6-a401-72e94c9f70ad" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.569478 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d3f978-a301-44e6-a401-72e94c9f70ad" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.570321 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.573521 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.575549 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.575595 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.575973 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.582087 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf"] Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.661075 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmtk2\" (UniqueName: \"kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.661125 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.661325 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.661440 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.763682 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.763839 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmtk2\" (UniqueName: 
\"kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.763877 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.763963 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.769547 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.770152 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.778814 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.782014 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmtk2\" (UniqueName: \"kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:34 crc kubenswrapper[5010]: I0203 10:30:34.892866 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:30:35 crc kubenswrapper[5010]: I0203 10:30:35.494185 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf"] Feb 03 10:30:36 crc kubenswrapper[5010]: I0203 10:30:36.514013 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" event={"ID":"2d389772-7902-4aca-8bc3-03a0708fbaa2","Type":"ContainerStarted","Data":"1c3d5f240ee62be6fa51825a10963f07b9c3d37c85ce03fca5f277444b1d0397"} Feb 03 10:30:36 crc kubenswrapper[5010]: I0203 10:30:36.514661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" event={"ID":"2d389772-7902-4aca-8bc3-03a0708fbaa2","Type":"ContainerStarted","Data":"2ef65aac28dddf89deb7ce485b857019655fec507cad6ee360424ff04f3a20c1"} Feb 03 10:30:36 crc kubenswrapper[5010]: I0203 10:30:36.542053 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" podStartSLOduration=2.059071199 podStartE2EDuration="2.542028253s" podCreationTimestamp="2026-02-03 10:30:34 +0000 UTC" firstStartedPulling="2026-02-03 10:30:35.499025859 +0000 UTC m=+1705.655001988" lastFinishedPulling="2026-02-03 10:30:35.981982863 +0000 UTC m=+1706.137959042" observedRunningTime="2026-02-03 10:30:36.534704536 +0000 UTC m=+1706.690680665" watchObservedRunningTime="2026-02-03 10:30:36.542028253 +0000 UTC m=+1706.698004382" Feb 03 10:30:37 crc kubenswrapper[5010]: I0203 10:30:37.503005 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:30:37 crc kubenswrapper[5010]: E0203 10:30:37.503727 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:30:51 crc kubenswrapper[5010]: I0203 10:30:51.503116 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:30:51 crc kubenswrapper[5010]: E0203 10:30:51.504152 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:31:02 crc kubenswrapper[5010]: I0203 10:31:02.503281 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:31:02 crc kubenswrapper[5010]: E0203 10:31:02.504099 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:31:14 crc kubenswrapper[5010]: I0203 10:31:14.502820 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:31:14 crc kubenswrapper[5010]: E0203 10:31:14.504263 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:31:19 crc kubenswrapper[5010]: I0203 10:31:19.851304 5010 scope.go:117] "RemoveContainer" containerID="284a769b3c25b0cdea9e5ddf661cc8aed190c024694193ebf7516c57518d0765" Feb 03 10:31:29 crc kubenswrapper[5010]: I0203 10:31:29.501967 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:31:29 crc kubenswrapper[5010]: E0203 10:31:29.502878 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:31:41 crc kubenswrapper[5010]: I0203 10:31:41.502771 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:31:41 crc kubenswrapper[5010]: E0203 10:31:41.503490 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:31:53 crc kubenswrapper[5010]: I0203 10:31:53.502308 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:31:53 crc kubenswrapper[5010]: E0203 10:31:53.503826 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:32:08 crc kubenswrapper[5010]: I0203 10:32:08.503319 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:32:08 crc kubenswrapper[5010]: E0203 10:32:08.504106 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:32:19 crc kubenswrapper[5010]: I0203 10:32:19.981589 5010 scope.go:117] "RemoveContainer" containerID="204ff7b5906df6362a9178ddb04b60b73173622cbd63d2c7b2264912f116e282" Feb 03 10:32:20 crc kubenswrapper[5010]: I0203 10:32:20.056869 5010 scope.go:117] "RemoveContainer" containerID="4198ce459a693b38bf47283f126a3f929ce83d42492541b2b961db5cda2701f4" Feb 03 10:32:20 crc kubenswrapper[5010]: I0203 10:32:20.103324 5010 scope.go:117] "RemoveContainer" containerID="1bd8603024a229914190fc469345835e8b37de52fd7f1951f53bc0059a29de92" Feb 03 10:32:20 crc kubenswrapper[5010]: I0203 10:32:20.127711 5010 scope.go:117] "RemoveContainer" containerID="67d6ea389313e14d97c8b6c045808e3c44adad70ca29d47d5585704fabd03630" Feb 03 10:32:20 crc kubenswrapper[5010]: I0203 10:32:20.509817 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:32:20 crc kubenswrapper[5010]: E0203 10:32:20.510189 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:32:31 crc kubenswrapper[5010]: I0203 10:32:31.502394 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:32:31 crc kubenswrapper[5010]: E0203 10:32:31.503045 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:32:43 crc kubenswrapper[5010]: I0203 10:32:43.502466 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:32:43 crc kubenswrapper[5010]: E0203 10:32:43.503265 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:32:57 crc kubenswrapper[5010]: I0203 10:32:57.502477 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:32:57 crc kubenswrapper[5010]: E0203 10:32:57.503648 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:33:05 crc 
kubenswrapper[5010]: I0203 10:33:05.119929 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.126024 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.135635 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.232693 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.232837 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-989jz\" (UniqueName: \"kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.233272 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.309619 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.312459 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.336503 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.336591 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-989jz\" (UniqueName: \"kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.336683 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.337890 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.340096 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.343110 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.378175 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-989jz\" (UniqueName: \"kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz\") pod \"redhat-marketplace-n5pfd\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.440186 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.440521 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzvf\" (UniqueName: \"kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.440972 5010 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.477524 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.543816 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzvf\" (UniqueName: \"kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.544018 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.544094 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.545080 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.545156 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.572762 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzvf\" (UniqueName: \"kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf\") pod \"community-operators-k5k8q\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:05 crc kubenswrapper[5010]: I0203 10:33:05.644350 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:06 crc kubenswrapper[5010]: I0203 10:33:06.401141 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:06 crc kubenswrapper[5010]: I0203 10:33:06.451547 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.441544 5010 generic.go:334] "Generic (PLEG): container finished" podID="f2d67207-8c20-4786-abde-621b94eada73" containerID="04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3" exitCode=0 Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.441747 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerDied","Data":"04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3"} Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.442722 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerStarted","Data":"8b3b270bebd4977e84cdc37b71c34f9d391c7521c5a7a8426582efd5470a62cc"} Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.445530 5010 generic.go:334] "Generic (PLEG): container finished" podID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerID="7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5" exitCode=0 Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.445598 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerDied","Data":"7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5"} Feb 03 10:33:07 crc kubenswrapper[5010]: I0203 10:33:07.445638 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerStarted","Data":"5b727abf7e342cd4d1d4e63479302a3e7250e0f31e5c2175523f9baf9010f5bf"} Feb 03 10:33:08 crc kubenswrapper[5010]: I0203 10:33:08.465195 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerStarted","Data":"d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697"} Feb 03 10:33:08 crc kubenswrapper[5010]: I0203 10:33:08.469721 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerStarted","Data":"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e"} Feb 03 10:33:09 crc kubenswrapper[5010]: I0203 10:33:09.480913 5010 generic.go:334] "Generic (PLEG): container finished" podID="f2d67207-8c20-4786-abde-621b94eada73" containerID="d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697" exitCode=0 Feb 03 10:33:09 crc kubenswrapper[5010]: I0203 10:33:09.481024 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerDied","Data":"d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697"} Feb 03 10:33:09 crc kubenswrapper[5010]: I0203 10:33:09.484188 5010 generic.go:334] "Generic (PLEG): 
container finished" podID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerID="7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e" exitCode=0 Feb 03 10:33:09 crc kubenswrapper[5010]: I0203 10:33:09.484247 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerDied","Data":"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e"} Feb 03 10:33:09 crc kubenswrapper[5010]: I0203 10:33:09.503141 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:33:09 crc kubenswrapper[5010]: E0203 10:33:09.503709 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:33:11 crc kubenswrapper[5010]: I0203 10:33:11.515508 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerStarted","Data":"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938"} Feb 03 10:33:11 crc kubenswrapper[5010]: I0203 10:33:11.518638 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerStarted","Data":"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb"} Feb 03 10:33:11 crc kubenswrapper[5010]: I0203 10:33:11.550707 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5pfd" podStartSLOduration=3.593418332 podStartE2EDuration="6.55065389s" podCreationTimestamp="2026-02-03 10:33:05 +0000 UTC" firstStartedPulling="2026-02-03 10:33:07.445794012 +0000 UTC m=+1857.601770141" lastFinishedPulling="2026-02-03 10:33:10.40302957 +0000 UTC m=+1860.559005699" observedRunningTime="2026-02-03 10:33:11.544119643 +0000 UTC m=+1861.700095772" watchObservedRunningTime="2026-02-03 10:33:11.55065389 +0000 UTC m=+1861.706630019" Feb 03 10:33:11 crc kubenswrapper[5010]: I0203 10:33:11.580911 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k5k8q" podStartSLOduration=3.521801696 podStartE2EDuration="6.580890736s" podCreationTimestamp="2026-02-03 10:33:05 +0000 UTC" firstStartedPulling="2026-02-03 10:33:07.448956804 +0000 UTC m=+1857.604932933" lastFinishedPulling="2026-02-03 10:33:10.508045844 +0000 UTC m=+1860.664021973" observedRunningTime="2026-02-03 10:33:11.573584169 +0000 UTC m=+1861.729560308" watchObservedRunningTime="2026-02-03 10:33:11.580890736 +0000 UTC m=+1861.736866865" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.478123 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.479065 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.536345 5010 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.651656 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.659890 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.661818 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:15 crc kubenswrapper[5010]: I0203 10:33:15.723298 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:16 crc kubenswrapper[5010]: I0203 10:33:16.630680 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:17 crc kubenswrapper[5010]: I0203 10:33:17.898272 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:17 crc kubenswrapper[5010]: I0203 10:33:17.898833 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n5pfd" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="registry-server" containerID="cri-o://13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938" gracePeriod=2 Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.094050 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.370123 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.409362 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities\") pod \"f2d67207-8c20-4786-abde-621b94eada73\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.409539 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-989jz\" (UniqueName: \"kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz\") pod \"f2d67207-8c20-4786-abde-621b94eada73\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.409593 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content\") pod \"f2d67207-8c20-4786-abde-621b94eada73\" (UID: \"f2d67207-8c20-4786-abde-621b94eada73\") " Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.412060 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities" (OuterVolumeSpecName: "utilities") pod "f2d67207-8c20-4786-abde-621b94eada73" (UID: "f2d67207-8c20-4786-abde-621b94eada73"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.424422 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz" (OuterVolumeSpecName: "kube-api-access-989jz") pod "f2d67207-8c20-4786-abde-621b94eada73" (UID: "f2d67207-8c20-4786-abde-621b94eada73"). InnerVolumeSpecName "kube-api-access-989jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.437563 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2d67207-8c20-4786-abde-621b94eada73" (UID: "f2d67207-8c20-4786-abde-621b94eada73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.511818 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.511851 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-989jz\" (UniqueName: \"kubernetes.io/projected/f2d67207-8c20-4786-abde-621b94eada73-kube-api-access-989jz\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.511864 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d67207-8c20-4786-abde-621b94eada73-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.608504 5010 generic.go:334] "Generic (PLEG): container finished" podID="f2d67207-8c20-4786-abde-621b94eada73" containerID="13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938" exitCode=0 Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.608605 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5pfd" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.608608 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerDied","Data":"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938"} Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.608672 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5pfd" event={"ID":"f2d67207-8c20-4786-abde-621b94eada73","Type":"ContainerDied","Data":"8b3b270bebd4977e84cdc37b71c34f9d391c7521c5a7a8426582efd5470a62cc"} Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.608698 5010 scope.go:117] "RemoveContainer" containerID="13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.640244 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.649473 5010 scope.go:117] "RemoveContainer" containerID="d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.650577 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5pfd"] Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.679010 5010 scope.go:117] "RemoveContainer" containerID="04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.731828 5010 scope.go:117] "RemoveContainer" containerID="13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938" Feb 03 10:33:18 crc kubenswrapper[5010]: E0203 10:33:18.733165 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938\": container with ID starting with 13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938 not found: ID does not exist" containerID="13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.733258 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938"} err="failed to get container status \"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938\": rpc error: code = NotFound desc = could not find container \"13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938\": container with ID starting with 13779d207f73eb455de95aa53c92ca689841b1f58de16a95a079d51445569938 not found: ID does not exist" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.733302 5010 scope.go:117] "RemoveContainer" containerID="d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697" Feb 03 10:33:18 crc kubenswrapper[5010]: E0203 10:33:18.734287 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697\": container with ID starting with d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697 not found: ID does not exist" containerID="d1fb3dce7267d3dfebecfa9527e3d582e6bc631c65cea833b64ea325f9d1e697" Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.734441 5010 
Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.734497 5010 scope.go:117] "RemoveContainer" containerID="04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3"
Feb 03 10:33:18 crc kubenswrapper[5010]: E0203 10:33:18.735039 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3\": container with ID starting with 04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3 not found: ID does not exist" containerID="04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3"
Feb 03 10:33:18 crc kubenswrapper[5010]: I0203 10:33:18.735082 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3"} err="failed to get container status \"04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3\": rpc error: code = NotFound desc = could not find container \"04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3\": container with ID starting with 04c9cc5a5a4cd6d4d704aec7a40619ebb2db979bd0973bc85bd4a92113b70fb3 not found: ID does not exist"
Feb 03 10:33:19 crc kubenswrapper[5010]: I0203 10:33:19.622426 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k5k8q" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="registry-server" containerID="cri-o://10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb" gracePeriod=2
Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.183805 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5k8q"
Need to start a new one" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.290832 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content\") pod \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.290938 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities\") pod \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.290982 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjzvf\" (UniqueName: \"kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf\") pod \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\" (UID: \"07b694c8-ca4a-4c06-9a6a-786e7f8501fc\") " Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.291950 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities" (OuterVolumeSpecName: "utilities") pod "07b694c8-ca4a-4c06-9a6a-786e7f8501fc" (UID: "07b694c8-ca4a-4c06-9a6a-786e7f8501fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.298389 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf" (OuterVolumeSpecName: "kube-api-access-sjzvf") pod "07b694c8-ca4a-4c06-9a6a-786e7f8501fc" (UID: "07b694c8-ca4a-4c06-9a6a-786e7f8501fc"). InnerVolumeSpecName "kube-api-access-sjzvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.350909 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07b694c8-ca4a-4c06-9a6a-786e7f8501fc" (UID: "07b694c8-ca4a-4c06-9a6a-786e7f8501fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.393352 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.393399 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjzvf\" (UniqueName: \"kubernetes.io/projected/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-kube-api-access-sjzvf\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.393415 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07b694c8-ca4a-4c06-9a6a-786e7f8501fc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.519802 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d67207-8c20-4786-abde-621b94eada73" path="/var/lib/kubelet/pods/f2d67207-8c20-4786-abde-621b94eada73/volumes" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.637925 5010 generic.go:334] "Generic (PLEG): container finished" podID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerID="10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb" exitCode=0 Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.638040 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5k8q" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.638040 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerDied","Data":"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb"} Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.638091 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5k8q" event={"ID":"07b694c8-ca4a-4c06-9a6a-786e7f8501fc","Type":"ContainerDied","Data":"5b727abf7e342cd4d1d4e63479302a3e7250e0f31e5c2175523f9baf9010f5bf"} Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.638115 5010 scope.go:117] "RemoveContainer" containerID="10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.666720 5010 scope.go:117] "RemoveContainer" containerID="7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.684012 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.712069 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k5k8q"] Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.724654 5010 scope.go:117] "RemoveContainer" containerID="7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5" Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.766979 5010 scope.go:117] "RemoveContainer" containerID="10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb" Feb 03 10:33:20 crc kubenswrapper[5010]: E0203 10:33:20.767742 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb\": container with ID 
Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.767956 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb"} err="failed to get container status \"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb\": rpc error: code = NotFound desc = could not find container \"10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb\": container with ID starting with 10a4520aa3bc2390b54f41b8fe12a47ea3a0cdd04893d055f4afe16a664ec4bb not found: ID does not exist"
Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.768092 5010 scope.go:117] "RemoveContainer" containerID="7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e"
Feb 03 10:33:20 crc kubenswrapper[5010]: E0203 10:33:20.768704 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e\": container with ID starting with 7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e not found: ID does not exist" containerID="7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e"
Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.768740 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e"} err="failed to get container status \"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e\": rpc error: code = NotFound desc = could not find container \"7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e\": container with ID starting with 7796bd8573df93a232f70ba25873c3b6ed23dfeb6afefe573eb43ec3546bd49e not found: ID does not exist"
Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.768767 5010 scope.go:117] "RemoveContainer" containerID="7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5"
Feb 03 10:33:20 crc kubenswrapper[5010]: E0203 10:33:20.769074 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5\": container with ID starting with 7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5 not found: ID does not exist" containerID="7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5"
Feb 03 10:33:20 crc kubenswrapper[5010]: I0203 10:33:20.769121 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5"} err="failed to get container status \"7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5\": rpc error: code = NotFound desc = could not find container \"7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5\": container with ID starting with 7c65ecc4d1675be30d5f625779c17a3952d9b47b1f7c37ee2e9b05592b3c8ca5 not found: ID does not exist"
Feb 03 10:33:21 crc kubenswrapper[5010]: I0203 10:33:21.503259 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1"
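Note: the "ContainerStatus from runtime service failed ... NotFound" errors above are a benign ordering artifact, not a failure. By the time the kubelet re-issues RemoveContainer and asks CRI-O for the container's status, the earlier deletion has already removed it, so the lookup returns NotFound for a container that is in fact gone. If node access is available, this can be confirmed with a hedged check like:

    crictl ps -a | grep 10a4520a

which should return nothing once the registry-server container has been removed.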
volumes dir" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" path="/var/lib/kubelet/pods/07b694c8-ca4a-4c06-9a6a-786e7f8501fc/volumes" Feb 03 10:33:22 crc kubenswrapper[5010]: I0203 10:33:22.667567 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0"} Feb 03 10:33:25 crc kubenswrapper[5010]: I0203 10:33:25.701673 5010 generic.go:334] "Generic (PLEG): container finished" podID="2d389772-7902-4aca-8bc3-03a0708fbaa2" containerID="1c3d5f240ee62be6fa51825a10963f07b9c3d37c85ce03fca5f277444b1d0397" exitCode=0 Feb 03 10:33:25 crc kubenswrapper[5010]: I0203 10:33:25.701765 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" event={"ID":"2d389772-7902-4aca-8bc3-03a0708fbaa2","Type":"ContainerDied","Data":"1c3d5f240ee62be6fa51825a10963f07b9c3d37c85ce03fca5f277444b1d0397"} Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.252611 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.374972 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam\") pod \"2d389772-7902-4aca-8bc3-03a0708fbaa2\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.375564 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle\") pod \"2d389772-7902-4aca-8bc3-03a0708fbaa2\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.375842 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory\") pod \"2d389772-7902-4aca-8bc3-03a0708fbaa2\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.376008 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmtk2\" (UniqueName: \"kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2\") pod \"2d389772-7902-4aca-8bc3-03a0708fbaa2\" (UID: \"2d389772-7902-4aca-8bc3-03a0708fbaa2\") " Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.398704 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2" (OuterVolumeSpecName: "kube-api-access-jmtk2") pod "2d389772-7902-4aca-8bc3-03a0708fbaa2" (UID: "2d389772-7902-4aca-8bc3-03a0708fbaa2"). InnerVolumeSpecName "kube-api-access-jmtk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.401516 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2d389772-7902-4aca-8bc3-03a0708fbaa2" (UID: "2d389772-7902-4aca-8bc3-03a0708fbaa2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.428452 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory" (OuterVolumeSpecName: "inventory") pod "2d389772-7902-4aca-8bc3-03a0708fbaa2" (UID: "2d389772-7902-4aca-8bc3-03a0708fbaa2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.429024 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2d389772-7902-4aca-8bc3-03a0708fbaa2" (UID: "2d389772-7902-4aca-8bc3-03a0708fbaa2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.479931 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.480002 5010 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.480046 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d389772-7902-4aca-8bc3-03a0708fbaa2-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.480061 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmtk2\" (UniqueName: \"kubernetes.io/projected/2d389772-7902-4aca-8bc3-03a0708fbaa2-kube-api-access-jmtk2\") on node \"crc\" DevicePath \"\"" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.728745 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" event={"ID":"2d389772-7902-4aca-8bc3-03a0708fbaa2","Type":"ContainerDied","Data":"2ef65aac28dddf89deb7ce485b857019655fec507cad6ee360424ff04f3a20c1"} Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.729177 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef65aac28dddf89deb7ce485b857019655fec507cad6ee360424ff04f3a20c1" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.728802 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.850657 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"] Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851410 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="extract-utilities" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851438 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="extract-utilities" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851450 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d389772-7902-4aca-8bc3-03a0708fbaa2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851459 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d389772-7902-4aca-8bc3-03a0708fbaa2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851471 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="extract-utilities" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851478 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="extract-utilities" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851508 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="extract-content" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851516 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="extract-content" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851534 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="registry-server" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851541 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="registry-server" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851557 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="extract-content" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851564 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="extract-content" Feb 03 10:33:27 crc kubenswrapper[5010]: E0203 10:33:27.851577 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="registry-server" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.851583 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="registry-server" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.869655 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b694c8-ca4a-4c06-9a6a-786e7f8501fc" containerName="registry-server" Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.869712 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d67207-8c20-4786-abde-621b94eada73" containerName="registry-server" Feb 03 10:33:27 crc 
Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.871322 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"]
Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.871474 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"
Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.881614 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.883404 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj"
Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.883602 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.884073 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.997948 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"
Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.998008 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"
Feb 03 10:33:27 crc kubenswrapper[5010]: I0203 10:33:27.998110 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtznz\" (UniqueName: \"kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"
Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.099951 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtznz\" (UniqueName: \"kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"
Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.100081 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"
\"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.100109 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.109611 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.111750 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.120894 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtznz\" (UniqueName: \"kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.198874 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:33:28 crc kubenswrapper[5010]: I0203 10:33:28.796978 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs"] Feb 03 10:33:29 crc kubenswrapper[5010]: I0203 10:33:29.746760 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" event={"ID":"96722ef6-9c22-4700-8163-b25503d014bd","Type":"ContainerStarted","Data":"fcc55e058fef1ec901480ccc1a34930515b347f1c4dd1ccd9091bdb239759001"} Feb 03 10:33:29 crc kubenswrapper[5010]: I0203 10:33:29.748325 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" event={"ID":"96722ef6-9c22-4700-8163-b25503d014bd","Type":"ContainerStarted","Data":"9581a94b3645ab2ab3a0f1ef5560e2783a192fe6d46b7146f415c304073f83e5"} Feb 03 10:33:29 crc kubenswrapper[5010]: I0203 10:33:29.778648 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" podStartSLOduration=2.280925344 podStartE2EDuration="2.778618586s" podCreationTimestamp="2026-02-03 10:33:27 +0000 UTC" firstStartedPulling="2026-02-03 10:33:28.803182502 +0000 UTC m=+1878.959158631" lastFinishedPulling="2026-02-03 10:33:29.300875744 +0000 UTC m=+1879.456851873" observedRunningTime="2026-02-03 10:33:29.765957411 +0000 UTC m=+1879.921933560" watchObservedRunningTime="2026-02-03 10:33:29.778618586 +0000 UTC m=+1879.934594715" Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.136030 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9qjk8"] Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.148072 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nh655"] Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.158789 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nh655"] Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.174430 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-caa6-account-create-update-69sjp"] Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.184385 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9qjk8"] Feb 03 10:33:37 crc kubenswrapper[5010]: I0203 10:33:37.194562 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-caa6-account-create-update-69sjp"] Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.037377 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3037-account-create-update-847d2"] Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.047728 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3037-account-create-update-847d2"] Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.518124 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf6f6f7-d993-486c-9dcf-63d6b298f898" path="/var/lib/kubelet/pods/7cf6f6f7-d993-486c-9dcf-63d6b298f898/volumes" Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.519261 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6faff8-cfd9-4253-8dc3-d3df2b3252be" path="/var/lib/kubelet/pods/9a6faff8-cfd9-4253-8dc3-d3df2b3252be/volumes" Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.520232 5010 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e03bfed-c1c6-4165-86c0-6c1415a30081" path="/var/lib/kubelet/pods/9e03bfed-c1c6-4165-86c0-6c1415a30081/volumes" Feb 03 10:33:38 crc kubenswrapper[5010]: I0203 10:33:38.521151 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996" path="/var/lib/kubelet/pods/b6d00c2e-f3a5-4332-b9c1-0cffe4dd1996/volumes" Feb 03 10:33:40 crc kubenswrapper[5010]: I0203 10:33:40.049577 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-g8ncl"] Feb 03 10:33:40 crc kubenswrapper[5010]: I0203 10:33:40.062113 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-g8ncl"] Feb 03 10:33:40 crc kubenswrapper[5010]: I0203 10:33:40.517521 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0505d3aa-dab1-4f61-af12-69804ff1345a" path="/var/lib/kubelet/pods/0505d3aa-dab1-4f61-af12-69804ff1345a/volumes" Feb 03 10:33:41 crc kubenswrapper[5010]: I0203 10:33:41.043381 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-06a9-account-create-update-764vb"] Feb 03 10:33:41 crc kubenswrapper[5010]: I0203 10:33:41.056692 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-06a9-account-create-update-764vb"] Feb 03 10:33:42 crc kubenswrapper[5010]: I0203 10:33:42.517753 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d0be64-0307-43ee-9c2c-905f1d22c267" path="/var/lib/kubelet/pods/e2d0be64-0307-43ee-9c2c-905f1d22c267/volumes" Feb 03 10:34:04 crc kubenswrapper[5010]: I0203 10:34:04.046555 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-742kg"] Feb 03 10:34:04 crc kubenswrapper[5010]: I0203 10:34:04.054016 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-742kg"] Feb 03 10:34:04 crc kubenswrapper[5010]: I0203 10:34:04.513842 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0efd6c3-d0dc-4ebc-a116-d7e811177fa6" path="/var/lib/kubelet/pods/c0efd6c3-d0dc-4ebc-a116-d7e811177fa6/volumes" Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.045541 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-z7nxm"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.056619 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-f06e-account-create-update-glqr6"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.065647 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-54zjm"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.076909 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5fk6k"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.086660 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5fk6k"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.100054 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-z7nxm"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.112099 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-54zjm"] Feb 03 10:34:15 crc kubenswrapper[5010]: I0203 10:34:15.124376 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f06e-account-create-update-glqr6"] Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.040385 
5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5102-account-create-update-nv7jr"] Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.053999 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5b83-account-create-update-hrlzs"] Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.068098 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5102-account-create-update-nv7jr"] Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.083401 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5b83-account-create-update-hrlzs"] Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.519009 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5b7adb-c7e4-4014-b37f-674861868979" path="/var/lib/kubelet/pods/1c5b7adb-c7e4-4014-b37f-674861868979/volumes" Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.520527 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8144e4b8-89a7-4c08-86b9-219ea9d4645c" path="/var/lib/kubelet/pods/8144e4b8-89a7-4c08-86b9-219ea9d4645c/volumes" Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.521438 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83561b9b-ec1d-4ef5-bb05-48780834e40d" path="/var/lib/kubelet/pods/83561b9b-ec1d-4ef5-bb05-48780834e40d/volumes" Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.522536 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90501abd-ab27-4c54-bd38-239e5803689b" path="/var/lib/kubelet/pods/90501abd-ab27-4c54-bd38-239e5803689b/volumes" Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.524512 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0e1d98-9045-4a70-8021-ac7dcf843775" path="/var/lib/kubelet/pods/9c0e1d98-9045-4a70-8021-ac7dcf843775/volumes" Feb 03 10:34:16 crc kubenswrapper[5010]: I0203 10:34:16.525677 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce7685e-8301-4c02-8e1b-386646d84264" path="/var/lib/kubelet/pods/fce7685e-8301-4c02-8e1b-386646d84264/volumes" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.237177 5010 scope.go:117] "RemoveContainer" containerID="ecc37d219487243243570207ff635b3c963683b6d23c8e89c6a83dba41ce9ef2" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.277801 5010 scope.go:117] "RemoveContainer" containerID="867e48e65d90b62aadc6ddb63e004c04adf8450508e9b1413072265967186694" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.332792 5010 scope.go:117] "RemoveContainer" containerID="5fd86f16e791f88f37d27cd6030a471785bd1ebc82355253888f61f74084bc56" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.399891 5010 scope.go:117] "RemoveContainer" containerID="5e4e86c382f25cd8e9bad9e5d4a055df36fab11bdb33c4c29ebe01bd4ab0d270" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.463957 5010 scope.go:117] "RemoveContainer" containerID="7faf76a4eb10f7d724f9bd83b1eb96f06a13d0bd092d0ededd050f56a18268b5" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.515552 5010 scope.go:117] "RemoveContainer" containerID="783df9142821b00a27f64292c3e26d0dec1e72fe32175024883cc3eb71e60b8b" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.568988 5010 scope.go:117] "RemoveContainer" containerID="02a4a1176b9659935ba9d5084dc9f0a979b3bf3765756a868a98c381f2e4df2c" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.610121 5010 scope.go:117] "RemoveContainer" 
containerID="175dd1c77e9a4d7de137280af274a9e26cedb6a12f8e491f927188b800875447" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.639838 5010 scope.go:117] "RemoveContainer" containerID="b8b094bb4a4489910ae853a898b2603c46e5923639a21e30a68a2dca1eee68b8" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.665582 5010 scope.go:117] "RemoveContainer" containerID="6a575e19d1e33cee77eb78ea1b934b59f477f565a39712db7cebceb61e00a60f" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.693829 5010 scope.go:117] "RemoveContainer" containerID="e98e811059a9c2d02f4a30baf36100191798d1770e183f8268ccff78ece3d154" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.721909 5010 scope.go:117] "RemoveContainer" containerID="5168c22750de205db4c3cef2742987a3feeb1460c92bf43dadf92987bcb6f04e" Feb 03 10:34:20 crc kubenswrapper[5010]: I0203 10:34:20.747579 5010 scope.go:117] "RemoveContainer" containerID="ea0bf3943fa2c4dbc35b90869ad8099512a31ad225b933cd4437ed8cc1770bf0" Feb 03 10:34:32 crc kubenswrapper[5010]: I0203 10:34:32.060414 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-b8wjx"] Feb 03 10:34:32 crc kubenswrapper[5010]: I0203 10:34:32.069321 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-b8wjx"] Feb 03 10:34:32 crc kubenswrapper[5010]: I0203 10:34:32.519629 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81f0078-44e5-4bbc-82ce-3d648e2e32db" path="/var/lib/kubelet/pods/a81f0078-44e5-4bbc-82ce-3d648e2e32db/volumes" Feb 03 10:34:41 crc kubenswrapper[5010]: I0203 10:34:41.039824 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xlhhb"] Feb 03 10:34:41 crc kubenswrapper[5010]: I0203 10:34:41.053393 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xlhhb"] Feb 03 10:34:42 crc kubenswrapper[5010]: I0203 10:34:42.519449 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3" path="/var/lib/kubelet/pods/a1bd0d83-2e8f-40ad-9e79-fa158b7cbff3/volumes" Feb 03 10:35:10 crc kubenswrapper[5010]: I0203 10:35:10.062742 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mvrf4"] Feb 03 10:35:10 crc kubenswrapper[5010]: I0203 10:35:10.087139 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mvrf4"] Feb 03 10:35:10 crc kubenswrapper[5010]: I0203 10:35:10.520616 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c2a4fab-65d6-47ac-9829-2b5b5e8d412c" path="/var/lib/kubelet/pods/5c2a4fab-65d6-47ac-9829-2b5b5e8d412c/volumes" Feb 03 10:35:19 crc kubenswrapper[5010]: I0203 10:35:19.054496 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tptfc"] Feb 03 10:35:19 crc kubenswrapper[5010]: I0203 10:35:19.067485 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tptfc"] Feb 03 10:35:20 crc kubenswrapper[5010]: I0203 10:35:20.517376 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ef610c-3c09-4b27-9b97-3a5350388caa" path="/var/lib/kubelet/pods/29ef610c-3c09-4b27-9b97-3a5350388caa/volumes" Feb 03 10:35:21 crc kubenswrapper[5010]: I0203 10:35:21.087362 5010 scope.go:117] "RemoveContainer" containerID="3e8d95734ac813f12b8b00d5738e5d5d21869fee2e05c53312641bbb6e639906" Feb 03 10:35:21 crc kubenswrapper[5010]: I0203 10:35:21.160444 5010 scope.go:117] "RemoveContainer" 
containerID="c2c236cbcbee82d440a00402bffa84360077e085e5045869a24060dbc0c3411c" Feb 03 10:35:21 crc kubenswrapper[5010]: I0203 10:35:21.226832 5010 scope.go:117] "RemoveContainer" containerID="9f5dffa42b9c5fba57b57a1ca0e358ff317d50df295683f9bc9e42abb84b1b81" Feb 03 10:35:21 crc kubenswrapper[5010]: I0203 10:35:21.269084 5010 scope.go:117] "RemoveContainer" containerID="2f477c6764bb977e8cc3e17e43a92a85fa737e9bdd4ffa07901f030c855e03b4" Feb 03 10:35:22 crc kubenswrapper[5010]: I0203 10:35:22.061446 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-swx9t"] Feb 03 10:35:22 crc kubenswrapper[5010]: I0203 10:35:22.071525 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-swx9t"] Feb 03 10:35:22 crc kubenswrapper[5010]: I0203 10:35:22.520401 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="457510b3-7c5a-456d-9df3-54fa7dee8c4b" path="/var/lib/kubelet/pods/457510b3-7c5a-456d-9df3-54fa7dee8c4b/volumes" Feb 03 10:35:23 crc kubenswrapper[5010]: I0203 10:35:23.102068 5010 generic.go:334] "Generic (PLEG): container finished" podID="96722ef6-9c22-4700-8163-b25503d014bd" containerID="fcc55e058fef1ec901480ccc1a34930515b347f1c4dd1ccd9091bdb239759001" exitCode=0 Feb 03 10:35:23 crc kubenswrapper[5010]: I0203 10:35:23.102140 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" event={"ID":"96722ef6-9c22-4700-8163-b25503d014bd","Type":"ContainerDied","Data":"fcc55e058fef1ec901480ccc1a34930515b347f1c4dd1ccd9091bdb239759001"} Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.772632 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.907374 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam\") pod \"96722ef6-9c22-4700-8163-b25503d014bd\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.907730 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory\") pod \"96722ef6-9c22-4700-8163-b25503d014bd\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.907826 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtznz\" (UniqueName: \"kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz\") pod \"96722ef6-9c22-4700-8163-b25503d014bd\" (UID: \"96722ef6-9c22-4700-8163-b25503d014bd\") " Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.918977 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz" (OuterVolumeSpecName: "kube-api-access-xtznz") pod "96722ef6-9c22-4700-8163-b25503d014bd" (UID: "96722ef6-9c22-4700-8163-b25503d014bd"). InnerVolumeSpecName "kube-api-access-xtznz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.946555 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96722ef6-9c22-4700-8163-b25503d014bd" (UID: "96722ef6-9c22-4700-8163-b25503d014bd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:35:24 crc kubenswrapper[5010]: I0203 10:35:24.948383 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory" (OuterVolumeSpecName: "inventory") pod "96722ef6-9c22-4700-8163-b25503d014bd" (UID: "96722ef6-9c22-4700-8163-b25503d014bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.012085 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.012184 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96722ef6-9c22-4700-8163-b25503d014bd-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.012199 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtznz\" (UniqueName: \"kubernetes.io/projected/96722ef6-9c22-4700-8163-b25503d014bd-kube-api-access-xtznz\") on node \"crc\" DevicePath \"\"" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.133501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" event={"ID":"96722ef6-9c22-4700-8163-b25503d014bd","Type":"ContainerDied","Data":"9581a94b3645ab2ab3a0f1ef5560e2783a192fe6d46b7146f415c304073f83e5"} Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.133558 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9581a94b3645ab2ab3a0f1ef5560e2783a192fe6d46b7146f415c304073f83e5" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.133633 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.238032 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc"] Feb 03 10:35:25 crc kubenswrapper[5010]: E0203 10:35:25.238834 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96722ef6-9c22-4700-8163-b25503d014bd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.238867 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="96722ef6-9c22-4700-8163-b25503d014bd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.239142 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="96722ef6-9c22-4700-8163-b25503d014bd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.240287 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.243772 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.244184 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.247422 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.248997 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.252656 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc"] Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.420722 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.420796 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.421184 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7qc\" (UniqueName: \"kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.523355 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7qc\" (UniqueName: \"kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.523593 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.523647 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.528667 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.535297 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.550583 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7qc\" (UniqueName: \"kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5tffc\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:25 crc kubenswrapper[5010]: I0203 10:35:25.563559 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:35:26 crc kubenswrapper[5010]: I0203 10:35:26.159186 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc"] Feb 03 10:35:27 crc kubenswrapper[5010]: I0203 10:35:27.160445 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" event={"ID":"efb76028-3500-476c-adef-dfc87d2cdab7","Type":"ContainerStarted","Data":"a4c375690fa1ec40eef647be11edc8538fbedd2b8d427496a33c1527d4387b78"} Feb 03 10:35:28 crc kubenswrapper[5010]: I0203 10:35:28.176686 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" event={"ID":"efb76028-3500-476c-adef-dfc87d2cdab7","Type":"ContainerStarted","Data":"a19b497c7c28c9ee6e75c3ef4fc8cf01ad5e203dac29a52316b01db981be31af"} Feb 03 10:35:28 crc kubenswrapper[5010]: I0203 10:35:28.210360 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" podStartSLOduration=1.597371485 podStartE2EDuration="3.210330998s" podCreationTimestamp="2026-02-03 10:35:25 +0000 UTC" firstStartedPulling="2026-02-03 10:35:26.163392155 +0000 UTC m=+1996.319368284" lastFinishedPulling="2026-02-03 10:35:27.776351668 +0000 UTC m=+1997.932327797" observedRunningTime="2026-02-03 10:35:28.205975106 +0000 UTC m=+1998.361951235" watchObservedRunningTime="2026-02-03 10:35:28.210330998 +0000 UTC m=+1998.366307127" Feb 03 10:35:31 crc kubenswrapper[5010]: I0203 10:35:31.049113 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-g6tdx"] Feb 03 10:35:31 crc kubenswrapper[5010]: I0203 10:35:31.056976 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-g6tdx"] Feb 03 10:35:32 crc kubenswrapper[5010]: I0203 10:35:32.517622 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad34e68-b20a-486c-b06b-e19f5aaaf917" path="/var/lib/kubelet/pods/bad34e68-b20a-486c-b06b-e19f5aaaf917/volumes" Feb 03 10:35:39 crc kubenswrapper[5010]: I0203 10:35:39.036003 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-b9wwp"] Feb 03 10:35:39 crc kubenswrapper[5010]: I0203 10:35:39.048817 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-b9wwp"] Feb 03 10:35:40 crc kubenswrapper[5010]: I0203 10:35:40.519759 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1acc33e7-f3ae-4131-a003-aa6b592269c6" path="/var/lib/kubelet/pods/1acc33e7-f3ae-4131-a003-aa6b592269c6/volumes" Feb 03 10:35:46 crc kubenswrapper[5010]: I0203 10:35:46.390644 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:35:46 crc kubenswrapper[5010]: I0203 10:35:46.392024 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:36:05 crc 
kubenswrapper[5010]: I0203 10:36:05.994206 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9jqtw"] Feb 03 10:36:05 crc kubenswrapper[5010]: I0203 10:36:05.999053 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.013568 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jqtw"] Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.086691 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.086796 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlnlr\" (UniqueName: \"kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.086967 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.190496 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.190620 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlnlr\" (UniqueName: \"kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.190765 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.191277 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.191542 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.218407 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlnlr\" (UniqueName: \"kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr\") pod \"certified-operators-9jqtw\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.327313 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:06 crc kubenswrapper[5010]: I0203 10:36:06.980774 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jqtw"] Feb 03 10:36:07 crc kubenswrapper[5010]: I0203 10:36:07.855310 5010 generic.go:334] "Generic (PLEG): container finished" podID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerID="268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4" exitCode=0 Feb 03 10:36:07 crc kubenswrapper[5010]: I0203 10:36:07.855394 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerDied","Data":"268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4"} Feb 03 10:36:07 crc kubenswrapper[5010]: I0203 10:36:07.855439 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerStarted","Data":"30462e26e895913aeae7a24f7294d049662d3489ceed1084bbd282871696eac4"} Feb 03 10:36:07 crc kubenswrapper[5010]: I0203 10:36:07.858984 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:36:09 crc kubenswrapper[5010]: I0203 10:36:09.880976 5010 generic.go:334] "Generic (PLEG): container finished" podID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerID="9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0" exitCode=0 Feb 03 10:36:09 crc kubenswrapper[5010]: I0203 10:36:09.881078 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerDied","Data":"9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0"} Feb 03 10:36:10 crc kubenswrapper[5010]: I0203 10:36:10.896034 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerStarted","Data":"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952"} Feb 03 10:36:10 crc kubenswrapper[5010]: I0203 10:36:10.924246 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9jqtw" podStartSLOduration=3.433578863 podStartE2EDuration="5.924190314s" podCreationTimestamp="2026-02-03 10:36:05 +0000 UTC" firstStartedPulling="2026-02-03 10:36:07.858563116 +0000 UTC m=+2038.014539245" lastFinishedPulling="2026-02-03 10:36:10.349174567 +0000 UTC m=+2040.505150696" observedRunningTime="2026-02-03 10:36:10.915981753 +0000 UTC m=+2041.071957892" watchObservedRunningTime="2026-02-03 
10:36:10.924190314 +0000 UTC m=+2041.080166453" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.329006 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.329911 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.390283 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.390554 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.390608 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.808791 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:16 crc kubenswrapper[5010]: I0203 10:36:16.867867 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jqtw"] Feb 03 10:36:18 crc kubenswrapper[5010]: I0203 10:36:18.813852 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9jqtw" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="registry-server" containerID="cri-o://fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952" gracePeriod=2 Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.358339 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.521053 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities\") pod \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.521292 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content\") pod \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.521431 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlnlr\" (UniqueName: \"kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr\") pod \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\" (UID: \"de98348e-d7aa-4a70-ba6f-8fbe414be6e4\") " Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.522562 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities" (OuterVolumeSpecName: "utilities") pod "de98348e-d7aa-4a70-ba6f-8fbe414be6e4" (UID: "de98348e-d7aa-4a70-ba6f-8fbe414be6e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.537693 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr" (OuterVolumeSpecName: "kube-api-access-jlnlr") pod "de98348e-d7aa-4a70-ba6f-8fbe414be6e4" (UID: "de98348e-d7aa-4a70-ba6f-8fbe414be6e4"). InnerVolumeSpecName "kube-api-access-jlnlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.584205 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de98348e-d7aa-4a70-ba6f-8fbe414be6e4" (UID: "de98348e-d7aa-4a70-ba6f-8fbe414be6e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.624475 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.624529 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.624545 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlnlr\" (UniqueName: \"kubernetes.io/projected/de98348e-d7aa-4a70-ba6f-8fbe414be6e4-kube-api-access-jlnlr\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.828530 5010 generic.go:334] "Generic (PLEG): container finished" podID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerID="fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952" exitCode=0 Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.828601 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerDied","Data":"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952"} Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.828619 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jqtw" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.828642 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jqtw" event={"ID":"de98348e-d7aa-4a70-ba6f-8fbe414be6e4","Type":"ContainerDied","Data":"30462e26e895913aeae7a24f7294d049662d3489ceed1084bbd282871696eac4"} Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.828666 5010 scope.go:117] "RemoveContainer" containerID="fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.873201 5010 scope.go:117] "RemoveContainer" containerID="9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.878897 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jqtw"] Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.889362 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9jqtw"] Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.902122 5010 scope.go:117] "RemoveContainer" containerID="268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.962899 5010 scope.go:117] "RemoveContainer" containerID="fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952" Feb 03 10:36:19 crc kubenswrapper[5010]: E0203 10:36:19.964479 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952\": container with ID starting with fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952 not found: ID does not exist" containerID="fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.964544 
5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952"} err="failed to get container status \"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952\": rpc error: code = NotFound desc = could not find container \"fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952\": container with ID starting with fea5b45f8ea17ca0fb6ddf89198f4aeb656aeec5c6f707e632c0393284c1b952 not found: ID does not exist" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.964580 5010 scope.go:117] "RemoveContainer" containerID="9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0" Feb 03 10:36:19 crc kubenswrapper[5010]: E0203 10:36:19.965321 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0\": container with ID starting with 9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0 not found: ID does not exist" containerID="9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.965363 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0"} err="failed to get container status \"9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0\": rpc error: code = NotFound desc = could not find container \"9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0\": container with ID starting with 9856b4a8ab6cd5521f3ecadd2c6de5ebc5f1bca491ed9a2f1088a081b22be4f0 not found: ID does not exist" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.965389 5010 scope.go:117] "RemoveContainer" containerID="268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4" Feb 03 10:36:19 crc kubenswrapper[5010]: E0203 10:36:19.966053 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4\": container with ID starting with 268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4 not found: ID does not exist" containerID="268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4" Feb 03 10:36:19 crc kubenswrapper[5010]: I0203 10:36:19.966087 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4"} err="failed to get container status \"268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4\": rpc error: code = NotFound desc = could not find container \"268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4\": container with ID starting with 268b25785e08a14766b846b60aaaca34bd6ab51f32a96303638926cb78db2ee4 not found: ID does not exist" Feb 03 10:36:20 crc kubenswrapper[5010]: I0203 10:36:20.515136 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" path="/var/lib/kubelet/pods/de98348e-d7aa-4a70-ba6f-8fbe414be6e4/volumes" Feb 03 10:36:21 crc kubenswrapper[5010]: I0203 10:36:21.453172 5010 scope.go:117] "RemoveContainer" containerID="90f279a47e6694b954d6224d0a36d83bb292142a861407bbd952b7ac0f3f1940" Feb 03 10:36:21 crc kubenswrapper[5010]: I0203 10:36:21.523752 5010 scope.go:117] "RemoveContainer" 
containerID="56c4bc07b47d992164c95f2c4bc219b10e3ec8444d085ea923e9fc23515c64b1" Feb 03 10:36:21 crc kubenswrapper[5010]: I0203 10:36:21.569637 5010 scope.go:117] "RemoveContainer" containerID="eec510d597d8f2314ae76e8de6136bb5224447e6e83068a025a8dfed4080a04f" Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.090694 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d58b-account-create-update-p69h5"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.109531 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qnsrk"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.121713 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-46aa-account-create-update-5gs9h"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.132923 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fztcs"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.144261 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dq6kw"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.157462 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c6bf-account-create-update-9xrwr"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.170157 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-46aa-account-create-update-5gs9h"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.183252 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qnsrk"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.192172 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dq6kw"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.203490 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d58b-account-create-update-p69h5"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.213601 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fztcs"] Feb 03 10:36:29 crc kubenswrapper[5010]: I0203 10:36:29.222656 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c6bf-account-create-update-9xrwr"] Feb 03 10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.534529 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="122231ac-5000-44d7-a524-2df85da0abd4" path="/var/lib/kubelet/pods/122231ac-5000-44d7-a524-2df85da0abd4/volumes" Feb 03 10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.536398 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19aa5f54-6733-454e-a1cf-92ba62fc4068" path="/var/lib/kubelet/pods/19aa5f54-6733-454e-a1cf-92ba62fc4068/volumes" Feb 03 10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.537175 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26fff59b-fc6c-46b2-9cb6-9ad352b4e39c" path="/var/lib/kubelet/pods/26fff59b-fc6c-46b2-9cb6-9ad352b4e39c/volumes" Feb 03 10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.538045 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="307672c5-ae66-4af2-bbbb-1a59c58ee4b2" path="/var/lib/kubelet/pods/307672c5-ae66-4af2-bbbb-1a59c58ee4b2/volumes" Feb 03 10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.541146 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fac5d19-4577-4190-b626-83d0b42fd46d" path="/var/lib/kubelet/pods/6fac5d19-4577-4190-b626-83d0b42fd46d/volumes" Feb 03 
10:36:30 crc kubenswrapper[5010]: I0203 10:36:30.542446 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab88b93-9009-49d9-8967-dc8f2b9a7244" path="/var/lib/kubelet/pods/cab88b93-9009-49d9-8967-dc8f2b9a7244/volumes" Feb 03 10:36:34 crc kubenswrapper[5010]: I0203 10:36:34.710682 5010 generic.go:334] "Generic (PLEG): container finished" podID="efb76028-3500-476c-adef-dfc87d2cdab7" containerID="a19b497c7c28c9ee6e75c3ef4fc8cf01ad5e203dac29a52316b01db981be31af" exitCode=0 Feb 03 10:36:34 crc kubenswrapper[5010]: I0203 10:36:34.711371 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" event={"ID":"efb76028-3500-476c-adef-dfc87d2cdab7","Type":"ContainerDied","Data":"a19b497c7c28c9ee6e75c3ef4fc8cf01ad5e203dac29a52316b01db981be31af"} Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.735291 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" event={"ID":"efb76028-3500-476c-adef-dfc87d2cdab7","Type":"ContainerDied","Data":"a4c375690fa1ec40eef647be11edc8538fbedd2b8d427496a33c1527d4387b78"} Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.737157 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4c375690fa1ec40eef647be11edc8538fbedd2b8d427496a33c1527d4387b78" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.845873 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.880203 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam\") pod \"efb76028-3500-476c-adef-dfc87d2cdab7\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.880313 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory\") pod \"efb76028-3500-476c-adef-dfc87d2cdab7\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.880467 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd7qc\" (UniqueName: \"kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc\") pod \"efb76028-3500-476c-adef-dfc87d2cdab7\" (UID: \"efb76028-3500-476c-adef-dfc87d2cdab7\") " Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.901759 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc" (OuterVolumeSpecName: "kube-api-access-kd7qc") pod "efb76028-3500-476c-adef-dfc87d2cdab7" (UID: "efb76028-3500-476c-adef-dfc87d2cdab7"). InnerVolumeSpecName "kube-api-access-kd7qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.913708 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "efb76028-3500-476c-adef-dfc87d2cdab7" (UID: "efb76028-3500-476c-adef-dfc87d2cdab7"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.940923 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory" (OuterVolumeSpecName: "inventory") pod "efb76028-3500-476c-adef-dfc87d2cdab7" (UID: "efb76028-3500-476c-adef-dfc87d2cdab7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.983264 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.983323 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efb76028-3500-476c-adef-dfc87d2cdab7-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:36 crc kubenswrapper[5010]: I0203 10:36:36.983339 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd7qc\" (UniqueName: \"kubernetes.io/projected/efb76028-3500-476c-adef-dfc87d2cdab7-kube-api-access-kd7qc\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:37 crc kubenswrapper[5010]: I0203 10:36:37.746475 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5tffc" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.324720 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7"] Feb 03 10:36:38 crc kubenswrapper[5010]: E0203 10:36:38.325820 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="registry-server" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.325917 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="registry-server" Feb 03 10:36:38 crc kubenswrapper[5010]: E0203 10:36:38.326019 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="extract-utilities" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.326076 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="extract-utilities" Feb 03 10:36:38 crc kubenswrapper[5010]: E0203 10:36:38.326134 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="extract-content" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.326184 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="extract-content" Feb 03 10:36:38 crc kubenswrapper[5010]: E0203 10:36:38.326269 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb76028-3500-476c-adef-dfc87d2cdab7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.326336 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb76028-3500-476c-adef-dfc87d2cdab7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.326660 5010 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="efb76028-3500-476c-adef-dfc87d2cdab7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.326765 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="de98348e-d7aa-4a70-ba6f-8fbe414be6e4" containerName="registry-server" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.327850 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.332513 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.332572 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.332829 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.332935 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.356099 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7"] Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.404759 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jlcb\" (UniqueName: \"kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.404911 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.404951 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.507344 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jlcb\" (UniqueName: \"kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.507496 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.507546 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.523770 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.523792 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.528661 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jlcb\" (UniqueName: \"kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:38 crc kubenswrapper[5010]: I0203 10:36:38.657177 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:39 crc kubenswrapper[5010]: I0203 10:36:39.050396 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7"] Feb 03 10:36:39 crc kubenswrapper[5010]: I0203 10:36:39.892893 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" event={"ID":"3109739d-69b7-439a-b6c4-a8affbe0af4f","Type":"ContainerStarted","Data":"45737c1cb8e9fea582eea7ed2cd21ed4f6a6d67483896231864db2a1599dc0be"} Feb 03 10:36:40 crc kubenswrapper[5010]: I0203 10:36:40.922229 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" event={"ID":"3109739d-69b7-439a-b6c4-a8affbe0af4f","Type":"ContainerStarted","Data":"434b05c94a108a94b87c9d056e86bd10915d2cd379e072c24caeee7d45d989df"} Feb 03 10:36:40 crc kubenswrapper[5010]: I0203 10:36:40.949128 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" podStartSLOduration=2.094395483 podStartE2EDuration="2.949102172s" podCreationTimestamp="2026-02-03 10:36:38 +0000 UTC" firstStartedPulling="2026-02-03 10:36:39.063919946 +0000 UTC m=+2069.219896075" lastFinishedPulling="2026-02-03 10:36:39.918626635 +0000 UTC m=+2070.074602764" observedRunningTime="2026-02-03 10:36:40.94200677 +0000 UTC m=+2071.097982899" watchObservedRunningTime="2026-02-03 10:36:40.949102172 +0000 UTC m=+2071.105078291" Feb 03 10:36:45 crc kubenswrapper[5010]: I0203 10:36:45.975817 5010 generic.go:334] "Generic (PLEG): container finished" podID="3109739d-69b7-439a-b6c4-a8affbe0af4f" containerID="434b05c94a108a94b87c9d056e86bd10915d2cd379e072c24caeee7d45d989df" exitCode=0 Feb 03 10:36:45 crc kubenswrapper[5010]: I0203 10:36:45.975906 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" event={"ID":"3109739d-69b7-439a-b6c4-a8affbe0af4f","Type":"ContainerDied","Data":"434b05c94a108a94b87c9d056e86bd10915d2cd379e072c24caeee7d45d989df"} Feb 03 10:36:46 crc kubenswrapper[5010]: I0203 10:36:46.391034 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:36:46 crc kubenswrapper[5010]: I0203 10:36:46.391149 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:36:46 crc kubenswrapper[5010]: I0203 10:36:46.391276 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:36:46 crc kubenswrapper[5010]: I0203 10:36:46.392510 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:36:46 crc kubenswrapper[5010]: I0203 10:36:46.392590 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0" gracePeriod=600 Feb 03 10:36:47 crc kubenswrapper[5010]: I0203 10:36:47.016490 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0" exitCode=0 Feb 03 10:36:47 crc kubenswrapper[5010]: I0203 10:36:47.016695 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0"} Feb 03 10:36:47 crc kubenswrapper[5010]: I0203 10:36:47.016959 5010 scope.go:117] "RemoveContainer" containerID="0b2959383eeccddbbf25124f42df447fcb4163e7a703e3c12933d7f18393d3c1" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.025040 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.033774 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" event={"ID":"3109739d-69b7-439a-b6c4-a8affbe0af4f","Type":"ContainerDied","Data":"45737c1cb8e9fea582eea7ed2cd21ed4f6a6d67483896231864db2a1599dc0be"} Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.033836 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45737c1cb8e9fea582eea7ed2cd21ed4f6a6d67483896231864db2a1599dc0be" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.033943 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.036055 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca"} Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.158380 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory\") pod \"3109739d-69b7-439a-b6c4-a8affbe0af4f\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.158449 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam\") pod \"3109739d-69b7-439a-b6c4-a8affbe0af4f\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.158534 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jlcb\" (UniqueName: \"kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb\") pod \"3109739d-69b7-439a-b6c4-a8affbe0af4f\" (UID: \"3109739d-69b7-439a-b6c4-a8affbe0af4f\") " Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.165685 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb" (OuterVolumeSpecName: "kube-api-access-5jlcb") pod "3109739d-69b7-439a-b6c4-a8affbe0af4f" (UID: "3109739d-69b7-439a-b6c4-a8affbe0af4f"). InnerVolumeSpecName "kube-api-access-5jlcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.193562 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory" (OuterVolumeSpecName: "inventory") pod "3109739d-69b7-439a-b6c4-a8affbe0af4f" (UID: "3109739d-69b7-439a-b6c4-a8affbe0af4f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.202665 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3109739d-69b7-439a-b6c4-a8affbe0af4f" (UID: "3109739d-69b7-439a-b6c4-a8affbe0af4f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.262969 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.263026 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3109739d-69b7-439a-b6c4-a8affbe0af4f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:48 crc kubenswrapper[5010]: I0203 10:36:48.263044 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jlcb\" (UniqueName: \"kubernetes.io/projected/3109739d-69b7-439a-b6c4-a8affbe0af4f-kube-api-access-5jlcb\") on node \"crc\" DevicePath \"\"" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.134307 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx"] Feb 03 10:36:49 crc kubenswrapper[5010]: E0203 10:36:49.135086 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3109739d-69b7-439a-b6c4-a8affbe0af4f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.135104 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3109739d-69b7-439a-b6c4-a8affbe0af4f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.135324 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3109739d-69b7-439a-b6c4-a8affbe0af4f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.136100 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.138677 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.138941 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.139315 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.139497 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.170967 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx"] Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.290581 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.290680 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.290747 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sklg\" (UniqueName: \"kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.393184 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.393303 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.393375 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sklg\" (UniqueName: \"kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.412714 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.412859 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.417455 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sklg\" (UniqueName: \"kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hz8vx\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:49 crc kubenswrapper[5010]: I0203 10:36:49.464301 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:36:50 crc kubenswrapper[5010]: I0203 10:36:50.112747 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx"] Feb 03 10:36:51 crc kubenswrapper[5010]: I0203 10:36:51.070990 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" event={"ID":"49056616-86cd-41cd-a102-1072dc2a79f4","Type":"ContainerStarted","Data":"8ceab44a914b6581fca750f970dc22a5a0859a72d8fff8bc1ebf38c9e4bf8adb"} Feb 03 10:36:51 crc kubenswrapper[5010]: I0203 10:36:51.097539 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" podStartSLOduration=1.412546588 podStartE2EDuration="2.097499832s" podCreationTimestamp="2026-02-03 10:36:49 +0000 UTC" firstStartedPulling="2026-02-03 10:36:50.129112437 +0000 UTC m=+2080.285088566" lastFinishedPulling="2026-02-03 10:36:50.814065681 +0000 UTC m=+2080.970041810" observedRunningTime="2026-02-03 10:36:51.091862329 +0000 UTC m=+2081.247838468" watchObservedRunningTime="2026-02-03 10:36:51.097499832 +0000 UTC m=+2081.253475981" Feb 03 10:36:52 crc kubenswrapper[5010]: I0203 10:36:52.083364 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" event={"ID":"49056616-86cd-41cd-a102-1072dc2a79f4","Type":"ContainerStarted","Data":"5dd8dd8cf6f829db6c31eb69931ea79632501cf4010715f37a3bb745083ad4c7"} Feb 03 10:37:09 crc kubenswrapper[5010]: I0203 10:37:09.057597 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gd6dz"] Feb 03 10:37:09 crc kubenswrapper[5010]: I0203 10:37:09.068745 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gd6dz"] Feb 
03 10:37:10 crc kubenswrapper[5010]: I0203 10:37:10.520700 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ca9130-4a3c-4c64-8557-5c5e29df551d" path="/var/lib/kubelet/pods/49ca9130-4a3c-4c64-8557-5c5e29df551d/volumes" Feb 03 10:37:21 crc kubenswrapper[5010]: I0203 10:37:21.746266 5010 scope.go:117] "RemoveContainer" containerID="4927cc4be235478029139ce32f036f214b152852871af562859aac3f62d37796" Feb 03 10:37:21 crc kubenswrapper[5010]: I0203 10:37:21.782788 5010 scope.go:117] "RemoveContainer" containerID="529624536a7c99d14d746a21069148e69bbb624ecc0d005496493ce4e1241033" Feb 03 10:37:21 crc kubenswrapper[5010]: I0203 10:37:21.862634 5010 scope.go:117] "RemoveContainer" containerID="a966998f1e0d5c656c412830d78b6e892d7c7c270d9300eb5f417be99b11fe63" Feb 03 10:37:21 crc kubenswrapper[5010]: I0203 10:37:21.901474 5010 scope.go:117] "RemoveContainer" containerID="279c8b5f461c06f3191fbc6bb211d5d862c782efbbff978992257a86dd9152d3" Feb 03 10:37:21 crc kubenswrapper[5010]: I0203 10:37:21.956578 5010 scope.go:117] "RemoveContainer" containerID="481559434a2d42e2a028cba399231b55666506a6320e8ddbe78f4de71650ba33" Feb 03 10:37:22 crc kubenswrapper[5010]: I0203 10:37:22.043092 5010 scope.go:117] "RemoveContainer" containerID="277036577a9bb8f26bb26efd4d33210a114ebacd0ae43e4abbbdfbe425f61dd5" Feb 03 10:37:22 crc kubenswrapper[5010]: I0203 10:37:22.078652 5010 scope.go:117] "RemoveContainer" containerID="48902a83c43af8a62b4d6b968a8b3ca68e0101eb2b41fc6cd1fdf99dd7be0466" Feb 03 10:37:28 crc kubenswrapper[5010]: I0203 10:37:28.477125 5010 generic.go:334] "Generic (PLEG): container finished" podID="49056616-86cd-41cd-a102-1072dc2a79f4" containerID="5dd8dd8cf6f829db6c31eb69931ea79632501cf4010715f37a3bb745083ad4c7" exitCode=0 Feb 03 10:37:28 crc kubenswrapper[5010]: I0203 10:37:28.477207 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" event={"ID":"49056616-86cd-41cd-a102-1072dc2a79f4","Type":"ContainerDied","Data":"5dd8dd8cf6f829db6c31eb69931ea79632501cf4010715f37a3bb745083ad4c7"} Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.050687 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.153084 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam\") pod \"49056616-86cd-41cd-a102-1072dc2a79f4\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.153328 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory\") pod \"49056616-86cd-41cd-a102-1072dc2a79f4\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.153512 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sklg\" (UniqueName: \"kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg\") pod \"49056616-86cd-41cd-a102-1072dc2a79f4\" (UID: \"49056616-86cd-41cd-a102-1072dc2a79f4\") " Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.162581 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg" (OuterVolumeSpecName: "kube-api-access-2sklg") pod "49056616-86cd-41cd-a102-1072dc2a79f4" (UID: "49056616-86cd-41cd-a102-1072dc2a79f4"). InnerVolumeSpecName "kube-api-access-2sklg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.190520 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory" (OuterVolumeSpecName: "inventory") pod "49056616-86cd-41cd-a102-1072dc2a79f4" (UID: "49056616-86cd-41cd-a102-1072dc2a79f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.190723 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49056616-86cd-41cd-a102-1072dc2a79f4" (UID: "49056616-86cd-41cd-a102-1072dc2a79f4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.256538 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.256593 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49056616-86cd-41cd-a102-1072dc2a79f4-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.256606 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sklg\" (UniqueName: \"kubernetes.io/projected/49056616-86cd-41cd-a102-1072dc2a79f4-kube-api-access-2sklg\") on node \"crc\" DevicePath \"\"" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.500413 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" event={"ID":"49056616-86cd-41cd-a102-1072dc2a79f4","Type":"ContainerDied","Data":"8ceab44a914b6581fca750f970dc22a5a0859a72d8fff8bc1ebf38c9e4bf8adb"} Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.500766 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ceab44a914b6581fca750f970dc22a5a0859a72d8fff8bc1ebf38c9e4bf8adb" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.500476 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hz8vx" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.699495 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67"] Feb 03 10:37:30 crc kubenswrapper[5010]: E0203 10:37:30.700007 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49056616-86cd-41cd-a102-1072dc2a79f4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.700041 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="49056616-86cd-41cd-a102-1072dc2a79f4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.700345 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="49056616-86cd-41cd-a102-1072dc2a79f4" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.701159 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.703522 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.703739 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.703878 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.704054 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.715763 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67"] Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.777361 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsksm\" (UniqueName: \"kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.777731 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.777946 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.879108 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.879236 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.879281 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsksm\" (UniqueName: 
\"kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.885493 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.886934 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:30 crc kubenswrapper[5010]: I0203 10:37:30.900265 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsksm\" (UniqueName: \"kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ktk67\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:31 crc kubenswrapper[5010]: I0203 10:37:31.024948 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:37:31 crc kubenswrapper[5010]: I0203 10:37:31.599176 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67"] Feb 03 10:37:32 crc kubenswrapper[5010]: I0203 10:37:32.527454 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" event={"ID":"f4e7c571-ff51-496f-81b8-2fee3f357d3f","Type":"ContainerStarted","Data":"1260438c118656fe4e67ffda841b44ea9f435d72463d4392e2d1bc79c2b65cc4"} Feb 03 10:37:33 crc kubenswrapper[5010]: I0203 10:37:33.544058 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" event={"ID":"f4e7c571-ff51-496f-81b8-2fee3f357d3f","Type":"ContainerStarted","Data":"adefada3395e7a33a2ffaa57c7dcc19ebdacf1eb1ed1e00a028b8ec6c747216c"} Feb 03 10:37:33 crc kubenswrapper[5010]: I0203 10:37:33.574554 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" podStartSLOduration=2.995727611 podStartE2EDuration="3.574516473s" podCreationTimestamp="2026-02-03 10:37:30 +0000 UTC" firstStartedPulling="2026-02-03 10:37:31.61576373 +0000 UTC m=+2121.771739859" lastFinishedPulling="2026-02-03 10:37:32.194552592 +0000 UTC m=+2122.350528721" observedRunningTime="2026-02-03 10:37:33.564621353 +0000 UTC m=+2123.720597492" watchObservedRunningTime="2026-02-03 10:37:33.574516473 +0000 UTC m=+2123.730492612" Feb 03 10:37:37 crc kubenswrapper[5010]: I0203 10:37:37.056635 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwnxk"] Feb 03 10:37:37 crc kubenswrapper[5010]: I0203 
10:37:37.070465 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-bqztf"] Feb 03 10:37:37 crc kubenswrapper[5010]: I0203 10:37:37.080206 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zwnxk"] Feb 03 10:37:37 crc kubenswrapper[5010]: I0203 10:37:37.101202 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-bqztf"] Feb 03 10:37:38 crc kubenswrapper[5010]: I0203 10:37:38.515215 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726ff8cb-3f2f-41a6-a61e-a79ed194505f" path="/var/lib/kubelet/pods/726ff8cb-3f2f-41a6-a61e-a79ed194505f/volumes" Feb 03 10:37:38 crc kubenswrapper[5010]: I0203 10:37:38.516094 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd352716-06a1-47da-9d5d-179bfed70cbe" path="/var/lib/kubelet/pods/bd352716-06a1-47da-9d5d-179bfed70cbe/volumes" Feb 03 10:38:15 crc kubenswrapper[5010]: I0203 10:38:15.953587 5010 generic.go:334] "Generic (PLEG): container finished" podID="f4e7c571-ff51-496f-81b8-2fee3f357d3f" containerID="adefada3395e7a33a2ffaa57c7dcc19ebdacf1eb1ed1e00a028b8ec6c747216c" exitCode=0 Feb 03 10:38:15 crc kubenswrapper[5010]: I0203 10:38:15.953705 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" event={"ID":"f4e7c571-ff51-496f-81b8-2fee3f357d3f","Type":"ContainerDied","Data":"adefada3395e7a33a2ffaa57c7dcc19ebdacf1eb1ed1e00a028b8ec6c747216c"} Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.408973 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.440930 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsksm\" (UniqueName: \"kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm\") pod \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.440984 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam\") pod \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.442279 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory\") pod \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\" (UID: \"f4e7c571-ff51-496f-81b8-2fee3f357d3f\") " Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.449735 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm" (OuterVolumeSpecName: "kube-api-access-fsksm") pod "f4e7c571-ff51-496f-81b8-2fee3f357d3f" (UID: "f4e7c571-ff51-496f-81b8-2fee3f357d3f"). InnerVolumeSpecName "kube-api-access-fsksm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.479790 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory" (OuterVolumeSpecName: "inventory") pod "f4e7c571-ff51-496f-81b8-2fee3f357d3f" (UID: "f4e7c571-ff51-496f-81b8-2fee3f357d3f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.487638 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f4e7c571-ff51-496f-81b8-2fee3f357d3f" (UID: "f4e7c571-ff51-496f-81b8-2fee3f357d3f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.545628 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.548309 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsksm\" (UniqueName: \"kubernetes.io/projected/f4e7c571-ff51-496f-81b8-2fee3f357d3f-kube-api-access-fsksm\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.548373 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4e7c571-ff51-496f-81b8-2fee3f357d3f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.974869 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" event={"ID":"f4e7c571-ff51-496f-81b8-2fee3f357d3f","Type":"ContainerDied","Data":"1260438c118656fe4e67ffda841b44ea9f435d72463d4392e2d1bc79c2b65cc4"} Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.974924 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1260438c118656fe4e67ffda841b44ea9f435d72463d4392e2d1bc79c2b65cc4" Feb 03 10:38:17 crc kubenswrapper[5010]: I0203 10:38:17.974956 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ktk67" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.082538 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pfhx5"] Feb 03 10:38:18 crc kubenswrapper[5010]: E0203 10:38:18.083239 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e7c571-ff51-496f-81b8-2fee3f357d3f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.083266 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e7c571-ff51-496f-81b8-2fee3f357d3f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.083660 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e7c571-ff51-496f-81b8-2fee3f357d3f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.084699 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.087856 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.088121 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.088263 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.088821 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.098393 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pfhx5"] Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.159837 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.159913 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.160206 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dffpv\" (UniqueName: \"kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.262498 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dffpv\" (UniqueName: \"kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.262708 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.262775 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:18 crc 
kubenswrapper[5010]: I0203 10:38:18.268130 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.271589 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.287596 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dffpv\" (UniqueName: \"kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv\") pod \"ssh-known-hosts-edpm-deployment-pfhx5\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:18 crc kubenswrapper[5010]: I0203 10:38:18.447412 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:19 crc kubenswrapper[5010]: I0203 10:38:19.002025 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pfhx5"] Feb 03 10:38:19 crc kubenswrapper[5010]: I0203 10:38:19.998744 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" event={"ID":"67a7675c-9074-4390-85ab-2bba845b2dc0","Type":"ContainerStarted","Data":"ad84f868170059a7ab2556c16e048551198df5d6e32880c0413f7f752b820801"} Feb 03 10:38:19 crc kubenswrapper[5010]: I0203 10:38:19.999389 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" event={"ID":"67a7675c-9074-4390-85ab-2bba845b2dc0","Type":"ContainerStarted","Data":"16cfb70c1a01a3b03fa245d03b25ae9e33090c913660087a2c06e2a10bb68b25"} Feb 03 10:38:20 crc kubenswrapper[5010]: I0203 10:38:20.023297 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" podStartSLOduration=1.562697649 podStartE2EDuration="2.023264863s" podCreationTimestamp="2026-02-03 10:38:18 +0000 UTC" firstStartedPulling="2026-02-03 10:38:19.015453064 +0000 UTC m=+2169.171429193" lastFinishedPulling="2026-02-03 10:38:19.476020278 +0000 UTC m=+2169.631996407" observedRunningTime="2026-02-03 10:38:20.014173574 +0000 UTC m=+2170.170149723" watchObservedRunningTime="2026-02-03 10:38:20.023264863 +0000 UTC m=+2170.179241012" Feb 03 10:38:21 crc kubenswrapper[5010]: I0203 10:38:21.052501 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-fmn8g"] Feb 03 10:38:21 crc kubenswrapper[5010]: I0203 10:38:21.068686 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-fmn8g"] Feb 03 10:38:22 crc kubenswrapper[5010]: I0203 10:38:22.271792 5010 scope.go:117] "RemoveContainer" containerID="9df92dcb078ed6d52131766accb050ab09c268253b0a5a65b5f79c4623de44a8" Feb 03 10:38:22 crc kubenswrapper[5010]: I0203 10:38:22.340549 5010 scope.go:117] "RemoveContainer" containerID="9ad6b084a459424fdad0649a5c871c7f22695bf5efe4abdfaf37dff65c794a08" Feb 03 
10:38:22 crc kubenswrapper[5010]: I0203 10:38:22.522194 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900a4dd0-c8e2-4416-9a0e-8fff95a5053b" path="/var/lib/kubelet/pods/900a4dd0-c8e2-4416-9a0e-8fff95a5053b/volumes" Feb 03 10:38:27 crc kubenswrapper[5010]: I0203 10:38:27.072329 5010 generic.go:334] "Generic (PLEG): container finished" podID="67a7675c-9074-4390-85ab-2bba845b2dc0" containerID="ad84f868170059a7ab2556c16e048551198df5d6e32880c0413f7f752b820801" exitCode=0 Feb 03 10:38:27 crc kubenswrapper[5010]: I0203 10:38:27.072415 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" event={"ID":"67a7675c-9074-4390-85ab-2bba845b2dc0","Type":"ContainerDied","Data":"ad84f868170059a7ab2556c16e048551198df5d6e32880c0413f7f752b820801"} Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.554156 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.614486 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dffpv\" (UniqueName: \"kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv\") pod \"67a7675c-9074-4390-85ab-2bba845b2dc0\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.614805 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam\") pod \"67a7675c-9074-4390-85ab-2bba845b2dc0\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.614920 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0\") pod \"67a7675c-9074-4390-85ab-2bba845b2dc0\" (UID: \"67a7675c-9074-4390-85ab-2bba845b2dc0\") " Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.637840 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv" (OuterVolumeSpecName: "kube-api-access-dffpv") pod "67a7675c-9074-4390-85ab-2bba845b2dc0" (UID: "67a7675c-9074-4390-85ab-2bba845b2dc0"). InnerVolumeSpecName "kube-api-access-dffpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.652399 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "67a7675c-9074-4390-85ab-2bba845b2dc0" (UID: "67a7675c-9074-4390-85ab-2bba845b2dc0"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.655172 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "67a7675c-9074-4390-85ab-2bba845b2dc0" (UID: "67a7675c-9074-4390-85ab-2bba845b2dc0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.718495 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dffpv\" (UniqueName: \"kubernetes.io/projected/67a7675c-9074-4390-85ab-2bba845b2dc0-kube-api-access-dffpv\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.718922 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:28 crc kubenswrapper[5010]: I0203 10:38:28.718936 5010 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/67a7675c-9074-4390-85ab-2bba845b2dc0-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.092289 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" event={"ID":"67a7675c-9074-4390-85ab-2bba845b2dc0","Type":"ContainerDied","Data":"16cfb70c1a01a3b03fa245d03b25ae9e33090c913660087a2c06e2a10bb68b25"} Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.092353 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16cfb70c1a01a3b03fa245d03b25ae9e33090c913660087a2c06e2a10bb68b25" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.092391 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pfhx5" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.196876 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955"] Feb 03 10:38:29 crc kubenswrapper[5010]: E0203 10:38:29.197584 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a7675c-9074-4390-85ab-2bba845b2dc0" containerName="ssh-known-hosts-edpm-deployment" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.197615 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a7675c-9074-4390-85ab-2bba845b2dc0" containerName="ssh-known-hosts-edpm-deployment" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.197859 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a7675c-9074-4390-85ab-2bba845b2dc0" containerName="ssh-known-hosts-edpm-deployment" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.198843 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.202874 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.203041 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.203200 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.203303 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.211980 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955"] Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.335492 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.335995 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.336138 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g95dd\" (UniqueName: \"kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.438387 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.438799 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.438903 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g95dd\" (UniqueName: \"kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.443646 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.444070 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.471049 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g95dd\" (UniqueName: \"kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nm955\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:29 crc kubenswrapper[5010]: I0203 10:38:29.522014 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:30 crc kubenswrapper[5010]: I0203 10:38:30.092680 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955"] Feb 03 10:38:30 crc kubenswrapper[5010]: I0203 10:38:30.106264 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" event={"ID":"a9fa7d27-81da-4dcd-adef-cb22c35d2641","Type":"ContainerStarted","Data":"3f547aa3ae89e8ad869fa80f68d0d92a3b533f4502565adfe14ea21576437811"} Feb 03 10:38:31 crc kubenswrapper[5010]: I0203 10:38:31.121626 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" event={"ID":"a9fa7d27-81da-4dcd-adef-cb22c35d2641","Type":"ContainerStarted","Data":"a2c1a089ffb9018c1598744774eeab67fd4a670e32068961d30cfdacfb7003cf"} Feb 03 10:38:31 crc kubenswrapper[5010]: I0203 10:38:31.145841 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" podStartSLOduration=1.711113717 podStartE2EDuration="2.145810988s" podCreationTimestamp="2026-02-03 10:38:29 +0000 UTC" firstStartedPulling="2026-02-03 10:38:30.093443352 +0000 UTC m=+2180.249419481" lastFinishedPulling="2026-02-03 10:38:30.528140623 +0000 UTC m=+2180.684116752" observedRunningTime="2026-02-03 10:38:31.143206412 +0000 UTC m=+2181.299182551" watchObservedRunningTime="2026-02-03 10:38:31.145810988 +0000 UTC m=+2181.301787127" Feb 03 10:38:39 crc kubenswrapper[5010]: I0203 10:38:39.214663 5010 generic.go:334] "Generic (PLEG): container finished" podID="a9fa7d27-81da-4dcd-adef-cb22c35d2641" containerID="a2c1a089ffb9018c1598744774eeab67fd4a670e32068961d30cfdacfb7003cf" exitCode=0 Feb 03 10:38:39 crc kubenswrapper[5010]: I0203 10:38:39.214747 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" event={"ID":"a9fa7d27-81da-4dcd-adef-cb22c35d2641","Type":"ContainerDied","Data":"a2c1a089ffb9018c1598744774eeab67fd4a670e32068961d30cfdacfb7003cf"} Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.671322 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.697377 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam\") pod \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.697533 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g95dd\" (UniqueName: \"kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd\") pod \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.697713 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory\") pod \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\" (UID: \"a9fa7d27-81da-4dcd-adef-cb22c35d2641\") " Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.707941 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd" (OuterVolumeSpecName: "kube-api-access-g95dd") pod "a9fa7d27-81da-4dcd-adef-cb22c35d2641" (UID: "a9fa7d27-81da-4dcd-adef-cb22c35d2641"). InnerVolumeSpecName "kube-api-access-g95dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.734255 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a9fa7d27-81da-4dcd-adef-cb22c35d2641" (UID: "a9fa7d27-81da-4dcd-adef-cb22c35d2641"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.734722 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory" (OuterVolumeSpecName: "inventory") pod "a9fa7d27-81da-4dcd-adef-cb22c35d2641" (UID: "a9fa7d27-81da-4dcd-adef-cb22c35d2641"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.801309 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g95dd\" (UniqueName: \"kubernetes.io/projected/a9fa7d27-81da-4dcd-adef-cb22c35d2641-kube-api-access-g95dd\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.801359 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:40 crc kubenswrapper[5010]: I0203 10:38:40.801383 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9fa7d27-81da-4dcd-adef-cb22c35d2641-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.245344 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" event={"ID":"a9fa7d27-81da-4dcd-adef-cb22c35d2641","Type":"ContainerDied","Data":"3f547aa3ae89e8ad869fa80f68d0d92a3b533f4502565adfe14ea21576437811"} Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.245403 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f547aa3ae89e8ad869fa80f68d0d92a3b533f4502565adfe14ea21576437811" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.245437 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nm955" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.348585 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt"] Feb 03 10:38:41 crc kubenswrapper[5010]: E0203 10:38:41.349303 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9fa7d27-81da-4dcd-adef-cb22c35d2641" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.349333 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fa7d27-81da-4dcd-adef-cb22c35d2641" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.349647 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9fa7d27-81da-4dcd-adef-cb22c35d2641" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.350750 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.354974 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt"] Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.355429 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.357046 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.357297 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.357528 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.416097 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98gt\" (UniqueName: \"kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.416494 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.416759 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.519587 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.519768 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.519855 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98gt\" (UniqueName: \"kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.525112 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.527015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.541963 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v98gt\" (UniqueName: \"kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:41 crc kubenswrapper[5010]: I0203 10:38:41.679413 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:42 crc kubenswrapper[5010]: I0203 10:38:42.295110 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt"] Feb 03 10:38:42 crc kubenswrapper[5010]: W0203 10:38:42.302396 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4357ef1_04ea_4dbd_acd8_70f34a5a72a1.slice/crio-eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f WatchSource:0}: Error finding container eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f: Status 404 returned error can't find the container with id eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f Feb 03 10:38:43 crc kubenswrapper[5010]: I0203 10:38:43.267025 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" event={"ID":"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1","Type":"ContainerStarted","Data":"6c7d133f60ff286a66264a98b7f12f03aac4dfb882e4add0318c4b41c3b61c5e"} Feb 03 10:38:43 crc kubenswrapper[5010]: I0203 10:38:43.267094 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" event={"ID":"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1","Type":"ContainerStarted","Data":"eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f"} Feb 03 10:38:43 crc kubenswrapper[5010]: I0203 10:38:43.299498 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" podStartSLOduration=1.832571597 podStartE2EDuration="2.299459882s" podCreationTimestamp="2026-02-03 10:38:41 +0000 UTC" firstStartedPulling="2026-02-03 10:38:42.307350048 +0000 UTC m=+2192.463326177" lastFinishedPulling="2026-02-03 10:38:42.774238333 +0000 UTC 
m=+2192.930214462" observedRunningTime="2026-02-03 10:38:43.288804582 +0000 UTC m=+2193.444780721" watchObservedRunningTime="2026-02-03 10:38:43.299459882 +0000 UTC m=+2193.455436021" Feb 03 10:38:46 crc kubenswrapper[5010]: I0203 10:38:46.393203 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:38:46 crc kubenswrapper[5010]: I0203 10:38:46.394452 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:38:53 crc kubenswrapper[5010]: I0203 10:38:53.374846 5010 generic.go:334] "Generic (PLEG): container finished" podID="d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" containerID="6c7d133f60ff286a66264a98b7f12f03aac4dfb882e4add0318c4b41c3b61c5e" exitCode=0 Feb 03 10:38:53 crc kubenswrapper[5010]: I0203 10:38:53.374953 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" event={"ID":"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1","Type":"ContainerDied","Data":"6c7d133f60ff286a66264a98b7f12f03aac4dfb882e4add0318c4b41c3b61c5e"} Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.839410 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.960424 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam\") pod \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.961011 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v98gt\" (UniqueName: \"kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt\") pod \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.961267 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory\") pod \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\" (UID: \"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1\") " Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.968383 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt" (OuterVolumeSpecName: "kube-api-access-v98gt") pod "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" (UID: "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1"). InnerVolumeSpecName "kube-api-access-v98gt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:38:54 crc kubenswrapper[5010]: I0203 10:38:54.999573 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" (UID: "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.015347 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory" (OuterVolumeSpecName: "inventory") pod "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" (UID: "d4357ef1-04ea-4dbd-acd8-70f34a5a72a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.065564 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v98gt\" (UniqueName: \"kubernetes.io/projected/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-kube-api-access-v98gt\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.065622 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.065669 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4357ef1-04ea-4dbd-acd8-70f34a5a72a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.397719 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" event={"ID":"d4357ef1-04ea-4dbd-acd8-70f34a5a72a1","Type":"ContainerDied","Data":"eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f"} Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.398294 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb60583b8ef340e99d80b80a3479341611c17448436cb30d55be356059ffb49f" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.398394 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.585277 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t"] Feb 03 10:38:55 crc kubenswrapper[5010]: E0203 10:38:55.586419 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.586529 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.586958 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4357ef1-04ea-4dbd-acd8-70f34a5a72a1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.588232 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.594135 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.594190 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.595288 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.595370 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.595380 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.595757 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.597319 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.598518 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.605785 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t"] Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783267 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783366 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783424 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf48t\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783668 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783752 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.783829 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784013 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784081 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784383 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784460 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784674 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784790 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784881 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.784957 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.886994 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887065 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887137 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887174 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: 
I0203 10:38:55.887252 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887282 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887322 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887360 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887406 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887445 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887484 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf48t\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887528 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.887552 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.888118 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.895581 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.895587 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.895607 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.895700 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.895908 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 
crc kubenswrapper[5010]: I0203 10:38:55.896592 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.897089 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.897807 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.898395 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.898498 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.901128 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.902651 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.898502 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.906188 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf48t\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-msc5t\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:55 crc kubenswrapper[5010]: I0203 10:38:55.914191 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:38:56 crc kubenswrapper[5010]: I0203 10:38:56.539336 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t"] Feb 03 10:38:57 crc kubenswrapper[5010]: I0203 10:38:57.431117 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" event={"ID":"af6128d5-2369-4ef9-99aa-61ad0bf3b213","Type":"ContainerStarted","Data":"63dca3b86ebc0bedc83753b112381678ddcf76ec0ef2ca15d3c8afd4ecbd5d8f"} Feb 03 10:38:58 crc kubenswrapper[5010]: I0203 10:38:58.442306 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" event={"ID":"af6128d5-2369-4ef9-99aa-61ad0bf3b213","Type":"ContainerStarted","Data":"9a318ac7fe459a01328aa8f01152357fffc9c775f7ce36af393d101490d5caae"} Feb 03 10:38:58 crc kubenswrapper[5010]: I0203 10:38:58.474318 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" podStartSLOduration=2.79230515 podStartE2EDuration="3.474286879s" podCreationTimestamp="2026-02-03 10:38:55 +0000 UTC" firstStartedPulling="2026-02-03 10:38:56.54385561 +0000 UTC m=+2206.699831759" lastFinishedPulling="2026-02-03 10:38:57.225837359 +0000 UTC m=+2207.381813488" observedRunningTime="2026-02-03 10:38:58.465086876 +0000 UTC m=+2208.621063005" watchObservedRunningTime="2026-02-03 10:38:58.474286879 +0000 UTC m=+2208.630263018" Feb 03 10:39:16 crc kubenswrapper[5010]: I0203 10:39:16.390719 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:39:16 crc kubenswrapper[5010]: I0203 10:39:16.391853 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:39:22 crc kubenswrapper[5010]: I0203 10:39:22.452006 5010 scope.go:117] "RemoveContainer" containerID="79dc7129a99144c2e59b3fda9930b79947c9ac7a248d6f8abe7b85572f2f5ea2" Feb 03 10:39:32 crc kubenswrapper[5010]: I0203 10:39:32.842942 5010 generic.go:334] "Generic (PLEG): container finished" podID="af6128d5-2369-4ef9-99aa-61ad0bf3b213" containerID="9a318ac7fe459a01328aa8f01152357fffc9c775f7ce36af393d101490d5caae" exitCode=0 Feb 
03 10:39:32 crc kubenswrapper[5010]: I0203 10:39:32.843045 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" event={"ID":"af6128d5-2369-4ef9-99aa-61ad0bf3b213","Type":"ContainerDied","Data":"9a318ac7fe459a01328aa8f01152357fffc9c775f7ce36af393d101490d5caae"} Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.314429 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379627 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379677 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379798 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379836 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379876 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379913 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.379968 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380073 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380098 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380131 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf48t\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380201 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380369 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380397 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.380473 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle\") pod \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\" (UID: \"af6128d5-2369-4ef9-99aa-61ad0bf3b213\") " Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.390637 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.390909 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.391320 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.391544 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.391799 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.393642 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.393786 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t" (OuterVolumeSpecName: "kube-api-access-bf48t") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "kube-api-access-bf48t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.394299 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.396676 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.396911 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.399184 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.410850 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.424663 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.434647 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory" (OuterVolumeSpecName: "inventory") pod "af6128d5-2369-4ef9-99aa-61ad0bf3b213" (UID: "af6128d5-2369-4ef9-99aa-61ad0bf3b213"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485088 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485140 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485155 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485168 5010 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485178 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485188 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485198 5010 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485225 5010 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485237 5010 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485246 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf48t\" (UniqueName: \"kubernetes.io/projected/af6128d5-2369-4ef9-99aa-61ad0bf3b213-kube-api-access-bf48t\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485254 5010 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485263 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485271 5010 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.485282 5010 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6128d5-2369-4ef9-99aa-61ad0bf3b213-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.868140 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" event={"ID":"af6128d5-2369-4ef9-99aa-61ad0bf3b213","Type":"ContainerDied","Data":"63dca3b86ebc0bedc83753b112381678ddcf76ec0ef2ca15d3c8afd4ecbd5d8f"} Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.868240 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63dca3b86ebc0bedc83753b112381678ddcf76ec0ef2ca15d3c8afd4ecbd5d8f" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.868616 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-msc5t" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.993223 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms"] Feb 03 10:39:34 crc kubenswrapper[5010]: E0203 10:39:34.993781 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6128d5-2369-4ef9-99aa-61ad0bf3b213" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.993803 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6128d5-2369-4ef9-99aa-61ad0bf3b213" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.994033 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6128d5-2369-4ef9-99aa-61ad0bf3b213" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.994891 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.998138 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.998344 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.998486 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:39:34 crc kubenswrapper[5010]: I0203 10:39:34.999016 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.000851 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.004702 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms"] Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.103907 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.104179 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.104275 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.104362 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xm69\" (UniqueName: \"kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.104416 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.207587 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.207664 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.207725 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.207766 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xm69\" (UniqueName: \"kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.207803 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.210056 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.215510 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.217096 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.224293 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.230166 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xm69\" (UniqueName: \"kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-js9ms\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.314338 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:39:35 crc kubenswrapper[5010]: I0203 10:39:35.913858 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms"] Feb 03 10:39:36 crc kubenswrapper[5010]: I0203 10:39:36.889564 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" event={"ID":"a3aac34b-fb9e-4853-9a1d-c311dc75f055","Type":"ContainerStarted","Data":"3f52d9d1e92e9e90ce0959d75ce4b497668740336daab15c8282bd36822b5df4"} Feb 03 10:39:36 crc kubenswrapper[5010]: I0203 10:39:36.891412 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" event={"ID":"a3aac34b-fb9e-4853-9a1d-c311dc75f055","Type":"ContainerStarted","Data":"f228078c9d3c1e62c32b6cff959cfdd12494b7ed083a2163851fad632fde6f98"} Feb 03 10:39:36 crc kubenswrapper[5010]: I0203 10:39:36.917974 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" podStartSLOduration=2.48103193 podStartE2EDuration="2.917953288s" podCreationTimestamp="2026-02-03 10:39:34 +0000 UTC" firstStartedPulling="2026-02-03 10:39:35.927122276 +0000 UTC m=+2246.083098405" lastFinishedPulling="2026-02-03 10:39:36.364043634 +0000 UTC m=+2246.520019763" observedRunningTime="2026-02-03 10:39:36.912132171 +0000 UTC m=+2247.068108320" watchObservedRunningTime="2026-02-03 10:39:36.917953288 +0000 UTC m=+2247.073929417" Feb 03 10:39:46 crc kubenswrapper[5010]: I0203 10:39:46.392163 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:39:46 crc kubenswrapper[5010]: I0203 10:39:46.393192 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:39:46 crc kubenswrapper[5010]: I0203 10:39:46.393314 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:39:46 crc kubenswrapper[5010]: I0203 10:39:46.394956 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:39:46 crc kubenswrapper[5010]: I0203 10:39:46.395142 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" gracePeriod=600 Feb 03 10:39:46 crc kubenswrapper[5010]: E0203 10:39:46.533857 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:39:47 crc kubenswrapper[5010]: I0203 10:39:47.018303 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" exitCode=0 Feb 03 10:39:47 crc kubenswrapper[5010]: I0203 10:39:47.018377 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca"} Feb 03 10:39:47 crc kubenswrapper[5010]: I0203 10:39:47.018751 5010 scope.go:117] "RemoveContainer" containerID="5dc093ef0ed9c15b3f47adc87cdb7004279d6322628d13c278c955d2873bd2f0" Feb 03 10:39:47 crc kubenswrapper[5010]: I0203 10:39:47.019741 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:39:47 crc kubenswrapper[5010]: E0203 10:39:47.020060 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.420532 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.424760 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.445636 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.544659 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.545094 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsvrh\" (UniqueName: \"kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.545256 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.648521 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.648966 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsvrh\" (UniqueName: \"kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.649088 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.649412 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.649978 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.673166 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vsvrh\" (UniqueName: \"kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh\") pod \"redhat-operators-7nbtm\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:55 crc kubenswrapper[5010]: I0203 10:39:55.751899 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:39:56 crc kubenswrapper[5010]: I0203 10:39:56.235230 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:39:57 crc kubenswrapper[5010]: I0203 10:39:57.142578 5010 generic.go:334] "Generic (PLEG): container finished" podID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerID="0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4" exitCode=0 Feb 03 10:39:57 crc kubenswrapper[5010]: I0203 10:39:57.142719 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerDied","Data":"0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4"} Feb 03 10:39:57 crc kubenswrapper[5010]: I0203 10:39:57.143062 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerStarted","Data":"9bc445c008eaa6b813b2b4224ac9fac4cd84d22c820ce73495cef261f897be92"} Feb 03 10:39:58 crc kubenswrapper[5010]: I0203 10:39:58.157366 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerStarted","Data":"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe"} Feb 03 10:40:01 crc kubenswrapper[5010]: I0203 10:40:01.191931 5010 generic.go:334] "Generic (PLEG): container finished" podID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerID="a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe" exitCode=0 Feb 03 10:40:01 crc kubenswrapper[5010]: I0203 10:40:01.192323 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerDied","Data":"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe"} Feb 03 10:40:02 crc kubenswrapper[5010]: I0203 10:40:02.217321 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerStarted","Data":"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8"} Feb 03 10:40:02 crc kubenswrapper[5010]: I0203 10:40:02.244346 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7nbtm" podStartSLOduration=2.673857046 podStartE2EDuration="7.244321879s" podCreationTimestamp="2026-02-03 10:39:55 +0000 UTC" firstStartedPulling="2026-02-03 10:39:57.14634947 +0000 UTC m=+2267.302325599" lastFinishedPulling="2026-02-03 10:40:01.716814293 +0000 UTC m=+2271.872790432" observedRunningTime="2026-02-03 10:40:02.24198364 +0000 UTC m=+2272.397959779" watchObservedRunningTime="2026-02-03 10:40:02.244321879 +0000 UTC m=+2272.400298028" Feb 03 10:40:02 crc kubenswrapper[5010]: I0203 10:40:02.502974 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:40:02 
crc kubenswrapper[5010]: E0203 10:40:02.503289 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:40:05 crc kubenswrapper[5010]: I0203 10:40:05.752170 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:05 crc kubenswrapper[5010]: I0203 10:40:05.752708 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:06 crc kubenswrapper[5010]: I0203 10:40:06.807160 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7nbtm" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="registry-server" probeResult="failure" output=< Feb 03 10:40:06 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 10:40:06 crc kubenswrapper[5010]: > Feb 03 10:40:14 crc kubenswrapper[5010]: I0203 10:40:14.502734 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:40:14 crc kubenswrapper[5010]: E0203 10:40:14.504123 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:40:15 crc kubenswrapper[5010]: I0203 10:40:15.804681 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:15 crc kubenswrapper[5010]: I0203 10:40:15.867278 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:16 crc kubenswrapper[5010]: I0203 10:40:16.047502 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:40:17 crc kubenswrapper[5010]: I0203 10:40:17.379612 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7nbtm" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="registry-server" containerID="cri-o://dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8" gracePeriod=2 Feb 03 10:40:17 crc kubenswrapper[5010]: E0203 10:40:17.781329 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f99b9bf_8e73_486e_9a15_bb92116cfcf2.slice/crio-conmon-dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f99b9bf_8e73_486e_9a15_bb92116cfcf2.slice/crio-dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8.scope\": RecentStats: unable to find data in memory cache]" Feb 03 10:40:18 crc kubenswrapper[5010]: 
I0203 10:40:18.189032 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.337387 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities\") pod \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.337563 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content\") pod \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.337628 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsvrh\" (UniqueName: \"kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh\") pod \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\" (UID: \"2f99b9bf-8e73-486e-9a15-bb92116cfcf2\") " Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.338175 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities" (OuterVolumeSpecName: "utilities") pod "2f99b9bf-8e73-486e-9a15-bb92116cfcf2" (UID: "2f99b9bf-8e73-486e-9a15-bb92116cfcf2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.338399 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.344233 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh" (OuterVolumeSpecName: "kube-api-access-vsvrh") pod "2f99b9bf-8e73-486e-9a15-bb92116cfcf2" (UID: "2f99b9bf-8e73-486e-9a15-bb92116cfcf2"). InnerVolumeSpecName "kube-api-access-vsvrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.394295 5010 generic.go:334] "Generic (PLEG): container finished" podID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerID="dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8" exitCode=0 Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.394361 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerDied","Data":"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8"} Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.394403 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nbtm" event={"ID":"2f99b9bf-8e73-486e-9a15-bb92116cfcf2","Type":"ContainerDied","Data":"9bc445c008eaa6b813b2b4224ac9fac4cd84d22c820ce73495cef261f897be92"} Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.394400 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7nbtm" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.394422 5010 scope.go:117] "RemoveContainer" containerID="dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.423590 5010 scope.go:117] "RemoveContainer" containerID="a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.441910 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsvrh\" (UniqueName: \"kubernetes.io/projected/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-kube-api-access-vsvrh\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.456842 5010 scope.go:117] "RemoveContainer" containerID="0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.555192 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f99b9bf-8e73-486e-9a15-bb92116cfcf2" (UID: "2f99b9bf-8e73-486e-9a15-bb92116cfcf2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.563302 5010 scope.go:117] "RemoveContainer" containerID="dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8" Feb 03 10:40:18 crc kubenswrapper[5010]: E0203 10:40:18.565874 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8\": container with ID starting with dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8 not found: ID does not exist" containerID="dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.565945 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8"} err="failed to get container status \"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8\": rpc error: code = NotFound desc = could not find container \"dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8\": container with ID starting with dafe75dcac4f2ab43c58c9c4bb0d7b758261d8d0b5e759e47500e58ebf08b4b8 not found: ID does not exist" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.565977 5010 scope.go:117] "RemoveContainer" containerID="a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe" Feb 03 10:40:18 crc kubenswrapper[5010]: E0203 10:40:18.571007 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe\": container with ID starting with a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe not found: ID does not exist" containerID="a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.571079 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe"} err="failed to get container status 
\"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe\": rpc error: code = NotFound desc = could not find container \"a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe\": container with ID starting with a4ca61c7bd90601b3161f840c9e12feceb991569a0a61cdd2c07e9c95a1fd2fe not found: ID does not exist" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.571119 5010 scope.go:117] "RemoveContainer" containerID="0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4" Feb 03 10:40:18 crc kubenswrapper[5010]: E0203 10:40:18.571741 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4\": container with ID starting with 0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4 not found: ID does not exist" containerID="0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.571826 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4"} err="failed to get container status \"0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4\": rpc error: code = NotFound desc = could not find container \"0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4\": container with ID starting with 0493ec8700066e61af014a4570a9d9f8dd96811f6bbcbe5b09486b28fcdfc8b4 not found: ID does not exist" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.651454 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f99b9bf-8e73-486e-9a15-bb92116cfcf2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.724506 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:40:18 crc kubenswrapper[5010]: I0203 10:40:18.733491 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7nbtm"] Feb 03 10:40:20 crc kubenswrapper[5010]: I0203 10:40:20.519612 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" path="/var/lib/kubelet/pods/2f99b9bf-8e73-486e-9a15-bb92116cfcf2/volumes" Feb 03 10:40:28 crc kubenswrapper[5010]: I0203 10:40:28.503010 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:40:28 crc kubenswrapper[5010]: E0203 10:40:28.503949 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:40:37 crc kubenswrapper[5010]: I0203 10:40:37.600753 5010 generic.go:334] "Generic (PLEG): container finished" podID="a3aac34b-fb9e-4853-9a1d-c311dc75f055" containerID="3f52d9d1e92e9e90ce0959d75ce4b497668740336daab15c8282bd36822b5df4" exitCode=0 Feb 03 10:40:37 crc kubenswrapper[5010]: I0203 10:40:37.600880 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" 
event={"ID":"a3aac34b-fb9e-4853-9a1d-c311dc75f055","Type":"ContainerDied","Data":"3f52d9d1e92e9e90ce0959d75ce4b497668740336daab15c8282bd36822b5df4"} Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.058257 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.071399 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0\") pod \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.071471 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory\") pod \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.071497 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam\") pod \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.071554 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle\") pod \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.071618 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xm69\" (UniqueName: \"kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69\") pod \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\" (UID: \"a3aac34b-fb9e-4853-9a1d-c311dc75f055\") " Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.094821 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69" (OuterVolumeSpecName: "kube-api-access-4xm69") pod "a3aac34b-fb9e-4853-9a1d-c311dc75f055" (UID: "a3aac34b-fb9e-4853-9a1d-c311dc75f055"). InnerVolumeSpecName "kube-api-access-4xm69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.100151 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a3aac34b-fb9e-4853-9a1d-c311dc75f055" (UID: "a3aac34b-fb9e-4853-9a1d-c311dc75f055"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.118848 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a3aac34b-fb9e-4853-9a1d-c311dc75f055" (UID: "a3aac34b-fb9e-4853-9a1d-c311dc75f055"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.119163 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a3aac34b-fb9e-4853-9a1d-c311dc75f055" (UID: "a3aac34b-fb9e-4853-9a1d-c311dc75f055"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.124729 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory" (OuterVolumeSpecName: "inventory") pod "a3aac34b-fb9e-4853-9a1d-c311dc75f055" (UID: "a3aac34b-fb9e-4853-9a1d-c311dc75f055"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.174980 5010 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.175032 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.175050 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.175066 5010 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3aac34b-fb9e-4853-9a1d-c311dc75f055-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.175081 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xm69\" (UniqueName: \"kubernetes.io/projected/a3aac34b-fb9e-4853-9a1d-c311dc75f055-kube-api-access-4xm69\") on node \"crc\" DevicePath \"\"" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.631296 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" event={"ID":"a3aac34b-fb9e-4853-9a1d-c311dc75f055","Type":"ContainerDied","Data":"f228078c9d3c1e62c32b6cff959cfdd12494b7ed083a2163851fad632fde6f98"} Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.631832 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f228078c9d3c1e62c32b6cff959cfdd12494b7ed083a2163851fad632fde6f98" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.631443 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-js9ms" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.752611 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p"] Feb 03 10:40:39 crc kubenswrapper[5010]: E0203 10:40:39.753197 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="extract-utilities" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753240 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="extract-utilities" Feb 03 10:40:39 crc kubenswrapper[5010]: E0203 10:40:39.753284 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3aac34b-fb9e-4853-9a1d-c311dc75f055" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753294 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3aac34b-fb9e-4853-9a1d-c311dc75f055" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 10:40:39 crc kubenswrapper[5010]: E0203 10:40:39.753319 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="registry-server" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753327 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="registry-server" Feb 03 10:40:39 crc kubenswrapper[5010]: E0203 10:40:39.753339 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="extract-content" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753347 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="extract-content" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753597 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f99b9bf-8e73-486e-9a15-bb92116cfcf2" containerName="registry-server" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.753638 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3aac34b-fb9e-4853-9a1d-c311dc75f055" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.754447 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.760909 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.761596 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.761675 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.761934 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.762073 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.773433 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.774970 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p"] Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.787665 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.787730 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.787762 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.787802 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ctpk\" (UniqueName: \"kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.788102 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.788198 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890056 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ctpk\" (UniqueName: \"kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890299 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890361 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890396 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890425 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.890452 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.897092 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.898065 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.898550 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.902560 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.907360 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:39 crc kubenswrapper[5010]: I0203 10:40:39.915389 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ctpk\" (UniqueName: \"kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:40 crc kubenswrapper[5010]: I0203 10:40:40.091865 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:40:40 crc kubenswrapper[5010]: I0203 10:40:40.509358 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:40:40 crc kubenswrapper[5010]: E0203 10:40:40.510309 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:40:40 crc kubenswrapper[5010]: I0203 10:40:40.677727 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p"] Feb 03 10:40:41 crc kubenswrapper[5010]: I0203 10:40:41.651820 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" event={"ID":"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e","Type":"ContainerStarted","Data":"353e88e11bb683a6d69babb16cd3d7bdaabf21b7deb3b73ec560099bb2acad68"} Feb 03 10:40:41 crc kubenswrapper[5010]: I0203 10:40:41.652371 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" event={"ID":"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e","Type":"ContainerStarted","Data":"c85bd6b31d4790e41a050bcdc12b1527bf94989144fef23a23b08a1424662ce1"} Feb 03 10:40:41 crc kubenswrapper[5010]: I0203 10:40:41.676816 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" podStartSLOduration=2.175466593 podStartE2EDuration="2.676795588s" podCreationTimestamp="2026-02-03 10:40:39 +0000 UTC" firstStartedPulling="2026-02-03 10:40:40.690824179 +0000 UTC m=+2310.846800308" lastFinishedPulling="2026-02-03 10:40:41.192153174 +0000 UTC m=+2311.348129303" observedRunningTime="2026-02-03 10:40:41.669318809 +0000 UTC m=+2311.825294938" watchObservedRunningTime="2026-02-03 10:40:41.676795588 +0000 UTC m=+2311.832771707" Feb 03 10:40:52 crc kubenswrapper[5010]: I0203 10:40:52.503479 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:40:52 crc kubenswrapper[5010]: E0203 10:40:52.504380 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:41:03 crc kubenswrapper[5010]: I0203 10:41:03.502862 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:41:03 crc kubenswrapper[5010]: E0203 10:41:03.504003 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:41:15 crc kubenswrapper[5010]: I0203 10:41:15.503052 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:41:15 crc kubenswrapper[5010]: E0203 10:41:15.504275 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:41:27 crc kubenswrapper[5010]: I0203 10:41:27.501919 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:41:27 crc kubenswrapper[5010]: E0203 10:41:27.502945 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:41:28 crc kubenswrapper[5010]: I0203 10:41:28.123850 5010 generic.go:334] "Generic (PLEG): container finished" podID="4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" containerID="353e88e11bb683a6d69babb16cd3d7bdaabf21b7deb3b73ec560099bb2acad68" exitCode=0 Feb 03 10:41:28 crc kubenswrapper[5010]: I0203 10:41:28.123885 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" event={"ID":"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e","Type":"ContainerDied","Data":"353e88e11bb683a6d69babb16cd3d7bdaabf21b7deb3b73ec560099bb2acad68"} Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.599072 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723380 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723443 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ctpk\" (UniqueName: \"kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723750 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723789 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723892 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.723955 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0\") pod \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\" (UID: \"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e\") " Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.732574 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.740732 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk" (OuterVolumeSpecName: "kube-api-access-8ctpk") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "kube-api-access-8ctpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.760840 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.764182 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory" (OuterVolumeSpecName: "inventory") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.764643 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.776703 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" (UID: "4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826520 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826571 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ctpk\" (UniqueName: \"kubernetes.io/projected/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-kube-api-access-8ctpk\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826593 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826609 5010 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826626 5010 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:29 crc kubenswrapper[5010]: I0203 10:41:29.826642 5010 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.146656 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" event={"ID":"4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e","Type":"ContainerDied","Data":"c85bd6b31d4790e41a050bcdc12b1527bf94989144fef23a23b08a1424662ce1"} Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.147488 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c85bd6b31d4790e41a050bcdc12b1527bf94989144fef23a23b08a1424662ce1" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.146708 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.242716 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d"] Feb 03 10:41:30 crc kubenswrapper[5010]: E0203 10:41:30.243363 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.243395 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.243692 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.244612 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.249964 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.250142 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.250623 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.252320 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.252620 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.255027 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d"] Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.337673 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.337752 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.338071 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5fnn\" (UniqueName: \"kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: 
\"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.338721 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.338886 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.442038 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5fnn\" (UniqueName: \"kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.442406 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.442471 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.442549 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.442579 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.448800 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: 
\"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.449467 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.451806 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.452294 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.465041 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5fnn\" (UniqueName: \"kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:30 crc kubenswrapper[5010]: I0203 10:41:30.572933 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" Feb 03 10:41:31 crc kubenswrapper[5010]: I0203 10:41:31.125161 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d"] Feb 03 10:41:31 crc kubenswrapper[5010]: I0203 10:41:31.128013 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:41:31 crc kubenswrapper[5010]: I0203 10:41:31.171054 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" event={"ID":"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d","Type":"ContainerStarted","Data":"27b2e3f9236cd72b126e3e7945fd42412d1ecde36745e5349c8e93bb4dc3e0ba"} Feb 03 10:41:32 crc kubenswrapper[5010]: I0203 10:41:32.181818 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" event={"ID":"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d","Type":"ContainerStarted","Data":"dc60d854ffb0ca1de8c7268f0cc8371c9a244cdbcc3aab97ecb9ef8424edbc47"} Feb 03 10:41:32 crc kubenswrapper[5010]: I0203 10:41:32.216500 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" podStartSLOduration=1.733853201 podStartE2EDuration="2.216475578s" podCreationTimestamp="2026-02-03 10:41:30 +0000 UTC" firstStartedPulling="2026-02-03 10:41:31.127713886 +0000 UTC m=+2361.283690015" lastFinishedPulling="2026-02-03 10:41:31.610336263 +0000 UTC m=+2361.766312392" observedRunningTime="2026-02-03 10:41:32.211501595 +0000 UTC m=+2362.367477724" watchObservedRunningTime="2026-02-03 10:41:32.216475578 +0000 UTC m=+2362.372451707" Feb 03 10:41:41 crc kubenswrapper[5010]: I0203 10:41:41.502780 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:41:41 crc kubenswrapper[5010]: E0203 10:41:41.503451 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:41:55 crc kubenswrapper[5010]: I0203 10:41:55.504336 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:41:55 crc kubenswrapper[5010]: E0203 10:41:55.505235 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:42:07 crc kubenswrapper[5010]: I0203 10:42:07.502435 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:42:07 crc kubenswrapper[5010]: E0203 10:42:07.503311 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:42:18 crc kubenswrapper[5010]: I0203 10:42:18.502975 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:42:18 crc kubenswrapper[5010]: E0203 10:42:18.504064 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:42:33 crc kubenswrapper[5010]: I0203 10:42:33.503350 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:42:33 crc kubenswrapper[5010]: E0203 10:42:33.504246 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:42:44 crc kubenswrapper[5010]: I0203 10:42:44.503916 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:42:44 crc kubenswrapper[5010]: E0203 10:42:44.505571 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:42:57 crc kubenswrapper[5010]: I0203 10:42:57.503013 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:42:57 crc kubenswrapper[5010]: E0203 10:42:57.504290 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:43:08 crc kubenswrapper[5010]: I0203 10:43:08.502944 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:43:08 crc kubenswrapper[5010]: E0203 10:43:08.505729 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:43:22 crc kubenswrapper[5010]: I0203 10:43:22.503183 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:43:22 crc kubenswrapper[5010]: E0203 10:43:22.504478 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:43:36 crc kubenswrapper[5010]: I0203 10:43:36.502517 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:43:36 crc kubenswrapper[5010]: E0203 10:43:36.505165 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:43:47 crc kubenswrapper[5010]: I0203 10:43:47.502692 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:43:47 crc kubenswrapper[5010]: E0203 10:43:47.503771 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:44:01 crc kubenswrapper[5010]: I0203 10:44:01.502368 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:44:01 crc kubenswrapper[5010]: E0203 10:44:01.503665 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.009581 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.013810 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.055091 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.125050 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.125166 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.125320 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwsxd\" (UniqueName: \"kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.227639 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.227713 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.227843 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwsxd\" (UniqueName: \"kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.228565 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.228862 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.258506 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cwsxd\" (UniqueName: \"kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd\") pod \"community-operators-ljhkd\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:03 crc kubenswrapper[5010]: I0203 10:44:03.358958 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:04 crc kubenswrapper[5010]: I0203 10:44:04.056786 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:04 crc kubenswrapper[5010]: I0203 10:44:04.421084 5010 generic.go:334] "Generic (PLEG): container finished" podID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerID="fab7f3bbda7f8de106f5a09ff1198783291792c10be97733b3f72a4e73a547fd" exitCode=0 Feb 03 10:44:04 crc kubenswrapper[5010]: I0203 10:44:04.421234 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerDied","Data":"fab7f3bbda7f8de106f5a09ff1198783291792c10be97733b3f72a4e73a547fd"} Feb 03 10:44:04 crc kubenswrapper[5010]: I0203 10:44:04.421569 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerStarted","Data":"ff8fcc1fa2ecc7eec7f5fd63831a577ce4a0643c9612428d734263739d579a21"} Feb 03 10:44:05 crc kubenswrapper[5010]: I0203 10:44:05.436096 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerStarted","Data":"ef76ba3add7a763104f7648f68a14558e1c39fbfc1e2d61b5f71994e15a7a7d1"} Feb 03 10:44:06 crc kubenswrapper[5010]: I0203 10:44:06.450500 5010 generic.go:334] "Generic (PLEG): container finished" podID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerID="ef76ba3add7a763104f7648f68a14558e1c39fbfc1e2d61b5f71994e15a7a7d1" exitCode=0 Feb 03 10:44:06 crc kubenswrapper[5010]: I0203 10:44:06.450636 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerDied","Data":"ef76ba3add7a763104f7648f68a14558e1c39fbfc1e2d61b5f71994e15a7a7d1"} Feb 03 10:44:08 crc kubenswrapper[5010]: I0203 10:44:08.472739 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerStarted","Data":"512a029216a528a2623119f8633f77e481b1b71064ff8ef79eee80c6c8d52d24"} Feb 03 10:44:08 crc kubenswrapper[5010]: I0203 10:44:08.494063 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ljhkd" podStartSLOduration=3.6406730080000003 podStartE2EDuration="6.494043586s" podCreationTimestamp="2026-02-03 10:44:02 +0000 UTC" firstStartedPulling="2026-02-03 10:44:04.423270652 +0000 UTC m=+2514.579246781" lastFinishedPulling="2026-02-03 10:44:07.27664123 +0000 UTC m=+2517.432617359" observedRunningTime="2026-02-03 10:44:08.491569595 +0000 UTC m=+2518.647545734" watchObservedRunningTime="2026-02-03 10:44:08.494043586 +0000 UTC m=+2518.650019715" Feb 03 10:44:12 crc kubenswrapper[5010]: I0203 10:44:12.949563 5010 scope.go:117] "RemoveContainer" 
containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:44:12 crc kubenswrapper[5010]: E0203 10:44:12.950464 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:44:13 crc kubenswrapper[5010]: I0203 10:44:13.359419 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:13 crc kubenswrapper[5010]: I0203 10:44:13.359467 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:13 crc kubenswrapper[5010]: I0203 10:44:13.412589 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:14 crc kubenswrapper[5010]: I0203 10:44:14.362644 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:14 crc kubenswrapper[5010]: I0203 10:44:14.432317 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:16 crc kubenswrapper[5010]: I0203 10:44:16.314276 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ljhkd" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="registry-server" containerID="cri-o://512a029216a528a2623119f8633f77e481b1b71064ff8ef79eee80c6c8d52d24" gracePeriod=2 Feb 03 10:44:17 crc kubenswrapper[5010]: I0203 10:44:17.328848 5010 generic.go:334] "Generic (PLEG): container finished" podID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerID="512a029216a528a2623119f8633f77e481b1b71064ff8ef79eee80c6c8d52d24" exitCode=0 Feb 03 10:44:17 crc kubenswrapper[5010]: I0203 10:44:17.328939 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerDied","Data":"512a029216a528a2623119f8633f77e481b1b71064ff8ef79eee80c6c8d52d24"} Feb 03 10:44:17 crc kubenswrapper[5010]: I0203 10:44:17.982828 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.053685 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content\") pod \"0d017619-3ae1-48aa-aff8-d66d1f176806\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.053810 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwsxd\" (UniqueName: \"kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd\") pod \"0d017619-3ae1-48aa-aff8-d66d1f176806\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.054099 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities\") pod \"0d017619-3ae1-48aa-aff8-d66d1f176806\" (UID: \"0d017619-3ae1-48aa-aff8-d66d1f176806\") " Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.055190 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities" (OuterVolumeSpecName: "utilities") pod "0d017619-3ae1-48aa-aff8-d66d1f176806" (UID: "0d017619-3ae1-48aa-aff8-d66d1f176806"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.064308 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd" (OuterVolumeSpecName: "kube-api-access-cwsxd") pod "0d017619-3ae1-48aa-aff8-d66d1f176806" (UID: "0d017619-3ae1-48aa-aff8-d66d1f176806"). InnerVolumeSpecName "kube-api-access-cwsxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.119488 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d017619-3ae1-48aa-aff8-d66d1f176806" (UID: "0d017619-3ae1-48aa-aff8-d66d1f176806"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.156962 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.157014 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d017619-3ae1-48aa-aff8-d66d1f176806-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.157028 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwsxd\" (UniqueName: \"kubernetes.io/projected/0d017619-3ae1-48aa-aff8-d66d1f176806-kube-api-access-cwsxd\") on node \"crc\" DevicePath \"\"" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.344150 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljhkd" event={"ID":"0d017619-3ae1-48aa-aff8-d66d1f176806","Type":"ContainerDied","Data":"ff8fcc1fa2ecc7eec7f5fd63831a577ce4a0643c9612428d734263739d579a21"} Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.344255 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljhkd" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.344264 5010 scope.go:117] "RemoveContainer" containerID="512a029216a528a2623119f8633f77e481b1b71064ff8ef79eee80c6c8d52d24" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.395296 5010 scope.go:117] "RemoveContainer" containerID="ef76ba3add7a763104f7648f68a14558e1c39fbfc1e2d61b5f71994e15a7a7d1" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.404678 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.416165 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ljhkd"] Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.430208 5010 scope.go:117] "RemoveContainer" containerID="fab7f3bbda7f8de106f5a09ff1198783291792c10be97733b3f72a4e73a547fd" Feb 03 10:44:18 crc kubenswrapper[5010]: I0203 10:44:18.521339 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" path="/var/lib/kubelet/pods/0d017619-3ae1-48aa-aff8-d66d1f176806/volumes" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.236115 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"] Feb 03 10:44:20 crc kubenswrapper[5010]: E0203 10:44:20.236989 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="extract-utilities" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.237008 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="extract-utilities" Feb 03 10:44:20 crc kubenswrapper[5010]: E0203 10:44:20.237023 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="registry-server" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.237031 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="registry-server" Feb 03 10:44:20 crc kubenswrapper[5010]: E0203 10:44:20.237068 5010 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="extract-content" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.237078 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="extract-content" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.237324 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d017619-3ae1-48aa-aff8-d66d1f176806" containerName="registry-server" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.239148 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.257630 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"] Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.307745 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.307825 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.308143 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lj7w\" (UniqueName: \"kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.411562 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.411627 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.411710 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lj7w\" (UniqueName: \"kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.412524 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.412547 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.437013 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lj7w\" (UniqueName: \"kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w\") pod \"redhat-marketplace-hclqp\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") " pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:20 crc kubenswrapper[5010]: I0203 10:44:20.568894 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:21 crc kubenswrapper[5010]: I0203 10:44:21.227859 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"] Feb 03 10:44:21 crc kubenswrapper[5010]: I0203 10:44:21.390661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerStarted","Data":"dc03929ced3815aaa6a44ceafc9ccce5fe2d5067d9e2ca6ab02e4bd24f776596"} Feb 03 10:44:22 crc kubenswrapper[5010]: I0203 10:44:22.402546 5010 generic.go:334] "Generic (PLEG): container finished" podID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerID="ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a" exitCode=0 Feb 03 10:44:22 crc kubenswrapper[5010]: I0203 10:44:22.402889 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerDied","Data":"ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a"} Feb 03 10:44:23 crc kubenswrapper[5010]: I0203 10:44:23.420637 5010 generic.go:334] "Generic (PLEG): container finished" podID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerID="d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee" exitCode=0 Feb 03 10:44:23 crc kubenswrapper[5010]: I0203 10:44:23.420732 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerDied","Data":"d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee"} Feb 03 10:44:24 crc kubenswrapper[5010]: I0203 10:44:24.436429 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerStarted","Data":"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb"} Feb 03 10:44:24 crc kubenswrapper[5010]: I0203 10:44:24.466096 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hclqp" podStartSLOduration=3.02439172 podStartE2EDuration="4.466068684s" podCreationTimestamp="2026-02-03 10:44:20 +0000 UTC" firstStartedPulling="2026-02-03 10:44:22.406496514 +0000 UTC m=+2532.562472643" 
lastFinishedPulling="2026-02-03 10:44:23.848173478 +0000 UTC m=+2534.004149607" observedRunningTime="2026-02-03 10:44:24.460806204 +0000 UTC m=+2534.616782343" watchObservedRunningTime="2026-02-03 10:44:24.466068684 +0000 UTC m=+2534.622044813" Feb 03 10:44:24 crc kubenswrapper[5010]: I0203 10:44:24.503436 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:44:24 crc kubenswrapper[5010]: E0203 10:44:24.503733 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:44:30 crc kubenswrapper[5010]: I0203 10:44:30.569571 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:30 crc kubenswrapper[5010]: I0203 10:44:30.571723 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:30 crc kubenswrapper[5010]: I0203 10:44:30.620327 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:31 crc kubenswrapper[5010]: I0203 10:44:31.575719 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hclqp" Feb 03 10:44:31 crc kubenswrapper[5010]: I0203 10:44:31.640259 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"] Feb 03 10:44:33 crc kubenswrapper[5010]: I0203 10:44:33.541743 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hclqp" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="registry-server" containerID="cri-o://1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb" gracePeriod=2 Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.184973 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hclqp"
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.279068 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities\") pod \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") "
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.279254 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lj7w\" (UniqueName: \"kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w\") pod \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") "
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.279359 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content\") pod \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\" (UID: \"4b028f28-bcda-4f8c-9203-28d3ca53b83f\") "
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.280403 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities" (OuterVolumeSpecName: "utilities") pod "4b028f28-bcda-4f8c-9203-28d3ca53b83f" (UID: "4b028f28-bcda-4f8c-9203-28d3ca53b83f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.289621 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w" (OuterVolumeSpecName: "kube-api-access-6lj7w") pod "4b028f28-bcda-4f8c-9203-28d3ca53b83f" (UID: "4b028f28-bcda-4f8c-9203-28d3ca53b83f"). InnerVolumeSpecName "kube-api-access-6lj7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.310538 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b028f28-bcda-4f8c-9203-28d3ca53b83f" (UID: "4b028f28-bcda-4f8c-9203-28d3ca53b83f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.382123 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.382168 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b028f28-bcda-4f8c-9203-28d3ca53b83f-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.382184 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lj7w\" (UniqueName: \"kubernetes.io/projected/4b028f28-bcda-4f8c-9203-28d3ca53b83f-kube-api-access-6lj7w\") on node \"crc\" DevicePath \"\""
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.553469 5010 generic.go:334] "Generic (PLEG): container finished" podID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerID="1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb" exitCode=0
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.553537 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerDied","Data":"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb"}
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.553879 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hclqp" event={"ID":"4b028f28-bcda-4f8c-9203-28d3ca53b83f","Type":"ContainerDied","Data":"dc03929ced3815aaa6a44ceafc9ccce5fe2d5067d9e2ca6ab02e4bd24f776596"}
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.553917 5010 scope.go:117] "RemoveContainer" containerID="1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb"
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.553616 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hclqp"
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.580203 5010 scope.go:117] "RemoveContainer" containerID="d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee"
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.598706 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"]
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.609522 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hclqp"]
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.619286 5010 scope.go:117] "RemoveContainer" containerID="ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a"
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.656663 5010 scope.go:117] "RemoveContainer" containerID="1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb"
Feb 03 10:44:34 crc kubenswrapper[5010]: E0203 10:44:34.657362 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb\": container with ID starting with 1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb not found: ID does not exist" containerID="1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb"
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.657411 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb"} err="failed to get container status \"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb\": rpc error: code = NotFound desc = could not find container \"1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb\": container with ID starting with 1ff878971849298eb6aef64e8f6f337659e4a4a4215bd1a1cf21d7ab0e4016bb not found: ID does not exist"
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.657442 5010 scope.go:117] "RemoveContainer" containerID="d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee"
Feb 03 10:44:34 crc kubenswrapper[5010]: E0203 10:44:34.657751 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee\": container with ID starting with d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee not found: ID does not exist" containerID="d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee"
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.657862 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee"} err="failed to get container status \"d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee\": rpc error: code = NotFound desc = could not find container \"d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee\": container with ID starting with d60c85535a2e54fde92aeedae00f9ef230eade1f3f31cd23645a983b4134a2ee not found: ID does not exist"
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.657957 5010 scope.go:117] "RemoveContainer" containerID="ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a"
Feb 03 10:44:34 crc kubenswrapper[5010]: E0203 10:44:34.658310 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a\": container with ID starting with ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a not found: ID does not exist" containerID="ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a"
Feb 03 10:44:34 crc kubenswrapper[5010]: I0203 10:44:34.658415 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a"} err="failed to get container status \"ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a\": rpc error: code = NotFound desc = could not find container \"ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a\": container with ID starting with ab457186781e2f3c1f15b4a02801684211ab2bfbee31f941c5d4642d2d943e0a not found: ID does not exist"
Feb 03 10:44:35 crc kubenswrapper[5010]: I0203 10:44:35.503150 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca"
Feb 03 10:44:35 crc kubenswrapper[5010]: E0203 10:44:35.503939 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 10:44:36 crc kubenswrapper[5010]: I0203 10:44:36.515625 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" path="/var/lib/kubelet/pods/4b028f28-bcda-4f8c-9203-28d3ca53b83f/volumes"
Feb 03 10:44:47 crc kubenswrapper[5010]: I0203 10:44:47.502711 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca"
Feb 03 10:44:48 crc kubenswrapper[5010]: I0203 10:44:48.703178 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261"}
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.170321 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"]
Feb 03 10:45:00 crc kubenswrapper[5010]: E0203 10:45:00.172265 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="extract-content"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.172286 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="extract-content"
Feb 03 10:45:00 crc kubenswrapper[5010]: E0203 10:45:00.172327 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="extract-utilities"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.172337 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="extract-utilities"
Feb 03 10:45:00 crc kubenswrapper[5010]: E0203 10:45:00.172371 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="registry-server"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.172398 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="registry-server"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.172687 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b028f28-bcda-4f8c-9203-28d3ca53b83f" containerName="registry-server"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.174003 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.176657 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.178684 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.190997 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"]
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.201966 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.202050 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdcrq\" (UniqueName: \"kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.202087 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.304533 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdcrq\" (UniqueName: \"kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.304609 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.306115 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.306776 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.316094 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.327564 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdcrq\" (UniqueName: \"kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq\") pod \"collect-profiles-29501925-nmkzb\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:00 crc kubenswrapper[5010]: I0203 10:45:00.509740 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:01 crc kubenswrapper[5010]: I0203 10:45:01.019394 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"]
Feb 03 10:45:01 crc kubenswrapper[5010]: W0203 10:45:01.023809 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f097429_a5b4_4a4a_8b81_6194870abf2e.slice/crio-7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee WatchSource:0}: Error finding container 7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee: Status 404 returned error can't find the container with id 7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee
Feb 03 10:45:01 crc kubenswrapper[5010]: I0203 10:45:01.856441 5010 generic.go:334] "Generic (PLEG): container finished" podID="8f097429-a5b4-4a4a-8b81-6194870abf2e" containerID="7d241b2d31d82749007029bfa402aa0fd6743ec37cf714478cf0ae1697c8b93d" exitCode=0
Feb 03 10:45:01 crc kubenswrapper[5010]: I0203 10:45:01.856559 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" event={"ID":"8f097429-a5b4-4a4a-8b81-6194870abf2e","Type":"ContainerDied","Data":"7d241b2d31d82749007029bfa402aa0fd6743ec37cf714478cf0ae1697c8b93d"}
Feb 03 10:45:01 crc kubenswrapper[5010]: I0203 10:45:01.856879 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" event={"ID":"8f097429-a5b4-4a4a-8b81-6194870abf2e","Type":"ContainerStarted","Data":"7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee"}
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.314251 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.406165 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume\") pod \"8f097429-a5b4-4a4a-8b81-6194870abf2e\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") "
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.406502 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdcrq\" (UniqueName: \"kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq\") pod \"8f097429-a5b4-4a4a-8b81-6194870abf2e\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") "
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.406598 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume\") pod \"8f097429-a5b4-4a4a-8b81-6194870abf2e\" (UID: \"8f097429-a5b4-4a4a-8b81-6194870abf2e\") "
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.406976 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f097429-a5b4-4a4a-8b81-6194870abf2e" (UID: "8f097429-a5b4-4a4a-8b81-6194870abf2e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.407505 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f097429-a5b4-4a4a-8b81-6194870abf2e-config-volume\") on node \"crc\" DevicePath \"\""
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.414286 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f097429-a5b4-4a4a-8b81-6194870abf2e" (UID: "8f097429-a5b4-4a4a-8b81-6194870abf2e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.416582 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq" (OuterVolumeSpecName: "kube-api-access-hdcrq") pod "8f097429-a5b4-4a4a-8b81-6194870abf2e" (UID: "8f097429-a5b4-4a4a-8b81-6194870abf2e"). InnerVolumeSpecName "kube-api-access-hdcrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.508566 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdcrq\" (UniqueName: \"kubernetes.io/projected/8f097429-a5b4-4a4a-8b81-6194870abf2e-kube-api-access-hdcrq\") on node \"crc\" DevicePath \"\""
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.508606 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f097429-a5b4-4a4a-8b81-6194870abf2e-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.879115 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb" event={"ID":"8f097429-a5b4-4a4a-8b81-6194870abf2e","Type":"ContainerDied","Data":"7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee"}
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.879613 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cb7db4695300eb847dfc5ba9e2d7a41baea67d3357353b3d4f124680a6934ee"
Feb 03 10:45:03 crc kubenswrapper[5010]: I0203 10:45:03.879264 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501925-nmkzb"
Feb 03 10:45:04 crc kubenswrapper[5010]: I0203 10:45:04.455283 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"]
Feb 03 10:45:04 crc kubenswrapper[5010]: I0203 10:45:04.465103 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501880-x6pjp"]
Feb 03 10:45:04 crc kubenswrapper[5010]: I0203 10:45:04.513571 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9c4aab-790c-4581-bfc2-ad1d7302c704" path="/var/lib/kubelet/pods/9b9c4aab-790c-4581-bfc2-ad1d7302c704/volumes"
Feb 03 10:45:15 crc kubenswrapper[5010]: I0203 10:45:15.000703 5010 generic.go:334] "Generic (PLEG): container finished" podID="5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" containerID="dc60d854ffb0ca1de8c7268f0cc8371c9a244cdbcc3aab97ecb9ef8424edbc47" exitCode=0
Feb 03 10:45:15 crc kubenswrapper[5010]: I0203 10:45:15.000803 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" event={"ID":"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d","Type":"ContainerDied","Data":"dc60d854ffb0ca1de8c7268f0cc8371c9a244cdbcc3aab97ecb9ef8424edbc47"}
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.618158 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d"
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.767070 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0\") pod \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") "
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.767242 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory\") pod \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") "
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.767306 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5fnn\" (UniqueName: \"kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn\") pod \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") "
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.767469 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam\") pod \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") "
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.767503 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle\") pod \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\" (UID: \"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d\") "
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.775330 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" (UID: "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.775458 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn" (OuterVolumeSpecName: "kube-api-access-p5fnn") pod "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" (UID: "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d"). InnerVolumeSpecName "kube-api-access-p5fnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.802767 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" (UID: "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.813117 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" (UID: "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.821519 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory" (OuterVolumeSpecName: "inventory") pod "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" (UID: "5b7ff70c-1251-4fd5-a71c-bf6703bcc85d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.870628 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.870692 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5fnn\" (UniqueName: \"kubernetes.io/projected/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-kube-api-access-p5fnn\") on node \"crc\" DevicePath \"\""
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.870705 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.870715 5010 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 10:45:16 crc kubenswrapper[5010]: I0203 10:45:16.870732 5010 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5b7ff70c-1251-4fd5-a71c-bf6703bcc85d-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.025409 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d" event={"ID":"5b7ff70c-1251-4fd5-a71c-bf6703bcc85d","Type":"ContainerDied","Data":"27b2e3f9236cd72b126e3e7945fd42412d1ecde36745e5349c8e93bb4dc3e0ba"}
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.025843 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b2e3f9236cd72b126e3e7945fd42412d1ecde36745e5349c8e93bb4dc3e0ba"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.025677 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.145537 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"]
Feb 03 10:45:17 crc kubenswrapper[5010]: E0203 10:45:17.146004 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f097429-a5b4-4a4a-8b81-6194870abf2e" containerName="collect-profiles"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.146026 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f097429-a5b4-4a4a-8b81-6194870abf2e" containerName="collect-profiles"
Feb 03 10:45:17 crc kubenswrapper[5010]: E0203 10:45:17.146064 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.146080 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.146434 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b7ff70c-1251-4fd5-a71c-bf6703bcc85d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.146466 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f097429-a5b4-4a4a-8b81-6194870abf2e" containerName="collect-profiles"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.147368 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.149661 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.149812 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.149976 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.151200 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.151397 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.151441 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.152441 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.164291 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"]
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.284791 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.284920 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.284956 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.284985 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.285153 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.285229 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2f6\" (UniqueName: \"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.285286 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.285385 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.285618 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387615 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387679 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387710 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387746 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387774 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2f6\" (UniqueName: \"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387837 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.387873 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.388383 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.388933 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.389270 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.392910 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.394940 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.395080 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.395237 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.397490 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.403064 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.404015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.408949 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2f6\" (UniqueName: \"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bq7n5\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:17 crc kubenswrapper[5010]: I0203 10:45:17.471533 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:45:18 crc kubenswrapper[5010]: I0203 10:45:18.036085 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"]
Feb 03 10:45:19 crc kubenswrapper[5010]: I0203 10:45:19.050710 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" event={"ID":"6fd37dcf-e81a-491a-a5e1-01a27517d1b4","Type":"ContainerStarted","Data":"b92d5a51c76184465825f539bc982313c7d3a25990aaa74ff31547c87be3d118"}
Feb 03 10:45:19 crc kubenswrapper[5010]: I0203 10:45:19.051364 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" event={"ID":"6fd37dcf-e81a-491a-a5e1-01a27517d1b4","Type":"ContainerStarted","Data":"3bdabc9e7c7a1e119a5dd6eb67d8df00ac4cf05c96ad5b5ff0ff7555b937fc53"}
Feb 03 10:45:19 crc kubenswrapper[5010]: I0203 10:45:19.078531 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" podStartSLOduration=1.617542332 podStartE2EDuration="2.078498475s" podCreationTimestamp="2026-02-03 10:45:17 +0000 UTC" firstStartedPulling="2026-02-03 10:45:18.044249998 +0000 UTC m=+2588.200226127" lastFinishedPulling="2026-02-03 10:45:18.505206141 +0000 UTC m=+2588.661182270" observedRunningTime="2026-02-03 10:45:19.075299896 +0000 UTC m=+2589.231276035" watchObservedRunningTime="2026-02-03 10:45:19.078498475 +0000 UTC m=+2589.234474614"
Feb 03 10:45:22 crc kubenswrapper[5010]: I0203 10:45:22.682868 5010 scope.go:117] "RemoveContainer" containerID="15e10260ef913b6b44e27ef0b7816cd144403f167a0779e8880ec7a69901a07c"
Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.808979 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"]
Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.811871 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.838142 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"]
Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.889623 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pcnv\" (UniqueName: \"kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.889702 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.889807 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.991246 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pcnv\" (UniqueName: \"kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.991722 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.991844 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.992647 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:11 crc kubenswrapper[5010]: I0203 10:46:11.992710 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:12 crc kubenswrapper[5010]: I0203 10:46:12.016251 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pcnv\" (UniqueName: \"kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv\") pod \"certified-operators-8mzgl\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") " pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:12 crc kubenswrapper[5010]: I0203 10:46:12.132764 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:12 crc kubenswrapper[5010]: I0203 10:46:12.659244 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"]
Feb 03 10:46:13 crc kubenswrapper[5010]: I0203 10:46:13.586074 5010 generic.go:334] "Generic (PLEG): container finished" podID="f79efd93-79ed-4459-9345-c203dd95ce20" containerID="c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85" exitCode=0
Feb 03 10:46:13 crc kubenswrapper[5010]: I0203 10:46:13.586278 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerDied","Data":"c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85"}
Feb 03 10:46:13 crc kubenswrapper[5010]: I0203 10:46:13.586464 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerStarted","Data":"863be305abcb465ebca5aea60206885e34000c983fd7bff7e9942058c39d5010"}
Feb 03 10:46:15 crc kubenswrapper[5010]: I0203 10:46:15.609033 5010 generic.go:334] "Generic (PLEG): container finished" podID="f79efd93-79ed-4459-9345-c203dd95ce20" containerID="98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0" exitCode=0
Feb 03 10:46:15 crc kubenswrapper[5010]: I0203 10:46:15.609155 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerDied","Data":"98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0"}
Feb 03 10:46:16 crc kubenswrapper[5010]: I0203 10:46:16.633249 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerStarted","Data":"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468"}
Feb 03 10:46:16 crc kubenswrapper[5010]: I0203 10:46:16.705543 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8mzgl" podStartSLOduration=3.251091633 podStartE2EDuration="5.705511981s" podCreationTimestamp="2026-02-03 10:46:11 +0000 UTC" firstStartedPulling="2026-02-03 10:46:13.589176573 +0000 UTC m=+2643.745152702" lastFinishedPulling="2026-02-03 10:46:16.043596921 +0000 UTC m=+2646.199573050" observedRunningTime="2026-02-03 10:46:16.698612024 +0000 UTC m=+2646.854588163" watchObservedRunningTime="2026-02-03 10:46:16.705511981 +0000 UTC m=+2646.861488120"
Feb 03 10:46:22 crc kubenswrapper[5010]: I0203 10:46:22.134339 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:22 crc kubenswrapper[5010]: I0203 10:46:22.134987 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:22 crc kubenswrapper[5010]: I0203 10:46:22.189039 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:22 crc kubenswrapper[5010]: I0203 10:46:22.745003 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:22 crc kubenswrapper[5010]: I0203 10:46:22.804036 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"]
Feb 03 10:46:24 crc kubenswrapper[5010]: I0203 10:46:24.716916 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8mzgl" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="registry-server" containerID="cri-o://5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468" gracePeriod=2
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.171960 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.316632 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pcnv\" (UniqueName: \"kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv\") pod \"f79efd93-79ed-4459-9345-c203dd95ce20\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") "
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.317037 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content\") pod \"f79efd93-79ed-4459-9345-c203dd95ce20\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") "
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.317331 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities\") pod \"f79efd93-79ed-4459-9345-c203dd95ce20\" (UID: \"f79efd93-79ed-4459-9345-c203dd95ce20\") "
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.318509 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities" (OuterVolumeSpecName: "utilities") pod "f79efd93-79ed-4459-9345-c203dd95ce20" (UID: "f79efd93-79ed-4459-9345-c203dd95ce20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.325498 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv" (OuterVolumeSpecName: "kube-api-access-9pcnv") pod "f79efd93-79ed-4459-9345-c203dd95ce20" (UID: "f79efd93-79ed-4459-9345-c203dd95ce20"). InnerVolumeSpecName "kube-api-access-9pcnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.419790 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.421367 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pcnv\" (UniqueName: \"kubernetes.io/projected/f79efd93-79ed-4459-9345-c203dd95ce20-kube-api-access-9pcnv\") on node \"crc\" DevicePath \"\""
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.661591 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f79efd93-79ed-4459-9345-c203dd95ce20" (UID: "f79efd93-79ed-4459-9345-c203dd95ce20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.727408 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79efd93-79ed-4459-9345-c203dd95ce20-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.733529 5010 generic.go:334] "Generic (PLEG): container finished" podID="f79efd93-79ed-4459-9345-c203dd95ce20" containerID="5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468" exitCode=0
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.733587 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerDied","Data":"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468"}
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.733599 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8mzgl"
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.733623 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8mzgl" event={"ID":"f79efd93-79ed-4459-9345-c203dd95ce20","Type":"ContainerDied","Data":"863be305abcb465ebca5aea60206885e34000c983fd7bff7e9942058c39d5010"}
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.733649 5010 scope.go:117] "RemoveContainer" containerID="5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468"
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.775533 5010 scope.go:117] "RemoveContainer" containerID="98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0"
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.788766 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"]
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.802348 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8mzgl"]
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.805754 5010 scope.go:117] "RemoveContainer" containerID="c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85"
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.848084 5010 scope.go:117] "RemoveContainer" containerID="5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468"
Feb 03 10:46:25 crc kubenswrapper[5010]: E0203 10:46:25.848848 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468\": container with ID starting with 5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468 not found: ID does not exist" containerID="5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468"
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.848943 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468"} err="failed to get container status \"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468\": rpc error: code = NotFound desc = could not find container \"5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468\": container with ID starting with 5b6bb16aa0b9be2a80ab460233826f9cbda4fd85680e75efbc0370d0c1738468 not found: ID does not exist"
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.849025 5010 scope.go:117] "RemoveContainer" containerID="98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0"
Feb 03 10:46:25 crc kubenswrapper[5010]: E0203 10:46:25.849484 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0\": container with ID starting with 98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0 not found: ID does not exist" containerID="98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0"
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.849515 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0"} err="failed to get container status \"98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0\": rpc error: code = NotFound desc = could not find container \"98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0\": container with ID starting with 98045c0699142a35fd580ee03bbd2be538447b9d3d6388b6e76f2677074cfdb0 not found: ID does not exist"
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.849539 5010 scope.go:117] "RemoveContainer" containerID="c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85"
Feb 03 10:46:25 crc kubenswrapper[5010]: E0203 10:46:25.849829 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85\": container with ID starting with c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85 not found: ID does not exist" containerID="c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85"
Feb 03 10:46:25 crc kubenswrapper[5010]: I0203 10:46:25.849867 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85"} err="failed to get container status \"c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85\": rpc error: code = NotFound desc = could not find container \"c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85\": container with ID starting with c562ce2f172268f30d32a1149d741246ef07fbb3b595aefe0237a71dafd6fb85 not found: ID does not exist"
Feb 03 10:46:26 crc kubenswrapper[5010]: I0203 10:46:26.515179 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" path="/var/lib/kubelet/pods/f79efd93-79ed-4459-9345-c203dd95ce20/volumes"
Feb 03 10:47:16 crc kubenswrapper[5010]: I0203 10:47:16.392241 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 10:47:16 crc kubenswrapper[5010]: I0203 10:47:16.393393 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 10:47:25 crc kubenswrapper[5010]: I0203 10:47:25.334879 5010 generic.go:334] "Generic (PLEG): container finished" podID="6fd37dcf-e81a-491a-a5e1-01a27517d1b4" containerID="b92d5a51c76184465825f539bc982313c7d3a25990aaa74ff31547c87be3d118" exitCode=0
Feb 03 10:47:25 crc kubenswrapper[5010]: I0203 10:47:25.335074 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" event={"ID":"6fd37dcf-e81a-491a-a5e1-01a27517d1b4","Type":"ContainerDied","Data":"b92d5a51c76184465825f539bc982313c7d3a25990aaa74ff31547c87be3d118"}
Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.808133 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5"
Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.877509 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb2f6\" (UniqueName: \"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") "
Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.877678 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") "
Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.878409 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") "
Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.878479 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") "
Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.878527 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") "
Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.878593 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") "
Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.879442 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") "
Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.879880 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") "
Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.879920 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0\") pod \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\" (UID: \"6fd37dcf-e81a-491a-a5e1-01a27517d1b4\") "
Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.892335 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6" (OuterVolumeSpecName: "kube-api-access-wb2f6") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "kube-api-access-wb2f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.893007 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.908072 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.927760 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.929430 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory" (OuterVolumeSpecName: "inventory") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.936798 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.949882 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.952933 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.962626 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6fd37dcf-e81a-491a-a5e1-01a27517d1b4" (UID: "6fd37dcf-e81a-491a-a5e1-01a27517d1b4"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982726 5010 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982780 5010 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982790 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb2f6\" (UniqueName: \"kubernetes.io/projected/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-kube-api-access-wb2f6\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982803 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982817 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982831 5010 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982843 5010 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982855 5010 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:26 crc kubenswrapper[5010]: I0203 10:47:26.982869 5010 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd37dcf-e81a-491a-a5e1-01a27517d1b4-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.358090 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" event={"ID":"6fd37dcf-e81a-491a-a5e1-01a27517d1b4","Type":"ContainerDied","Data":"3bdabc9e7c7a1e119a5dd6eb67d8df00ac4cf05c96ad5b5ff0ff7555b937fc53"} Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.358785 5010 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3bdabc9e7c7a1e119a5dd6eb67d8df00ac4cf05c96ad5b5ff0ff7555b937fc53" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.358189 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bq7n5" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.477699 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h"] Feb 03 10:47:27 crc kubenswrapper[5010]: E0203 10:47:27.478570 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="extract-content" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.478669 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="extract-content" Feb 03 10:47:27 crc kubenswrapper[5010]: E0203 10:47:27.478756 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="registry-server" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.478823 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="registry-server" Feb 03 10:47:27 crc kubenswrapper[5010]: E0203 10:47:27.478888 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="extract-utilities" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.478954 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="extract-utilities" Feb 03 10:47:27 crc kubenswrapper[5010]: E0203 10:47:27.479011 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd37dcf-e81a-491a-a5e1-01a27517d1b4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.479064 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd37dcf-e81a-491a-a5e1-01a27517d1b4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.479351 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd37dcf-e81a-491a-a5e1-01a27517d1b4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.479444 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79efd93-79ed-4459-9345-c203dd95ce20" containerName="registry-server" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.480317 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.483772 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dfmlj" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.484179 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.484328 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.484275 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.487255 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.493418 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h"] Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.620647 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.620733 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.620855 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-727r4\" (UniqueName: \"kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.620897 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.620973 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc 
kubenswrapper[5010]: I0203 10:47:27.621036 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.621093 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.723791 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.723894 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.723975 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.724034 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.724130 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-727r4\" (UniqueName: \"kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.724192 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.724267 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.729018 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.729038 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.729547 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.730094 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.728958 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.743743 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-727r4\" (UniqueName: \"kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.743766 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:27 crc kubenswrapper[5010]: I0203 10:47:27.830035 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:47:28 crc kubenswrapper[5010]: I0203 10:47:28.239418 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h"] Feb 03 10:47:28 crc kubenswrapper[5010]: I0203 10:47:28.251707 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:47:28 crc kubenswrapper[5010]: I0203 10:47:28.370086 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" event={"ID":"7353ead1-b7ae-446c-a262-5a383b1d7e52","Type":"ContainerStarted","Data":"91d0640bf20723aa34494df221748d24f3bd4a04ce7159801cea99aea978bc5e"} Feb 03 10:47:29 crc kubenswrapper[5010]: I0203 10:47:29.383701 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" event={"ID":"7353ead1-b7ae-446c-a262-5a383b1d7e52","Type":"ContainerStarted","Data":"b4880425775fd70bc813079913bbf8f5c4f8f371571355c4da87c44a571b62e6"} Feb 03 10:47:29 crc kubenswrapper[5010]: I0203 10:47:29.417943 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" podStartSLOduration=1.888472446 podStartE2EDuration="2.417913909s" podCreationTimestamp="2026-02-03 10:47:27 +0000 UTC" firstStartedPulling="2026-02-03 10:47:28.251403472 +0000 UTC m=+2718.407379601" lastFinishedPulling="2026-02-03 10:47:28.780844935 +0000 UTC m=+2718.936821064" observedRunningTime="2026-02-03 10:47:29.413547108 +0000 UTC m=+2719.569523237" watchObservedRunningTime="2026-02-03 10:47:29.417913909 +0000 UTC m=+2719.573890038" Feb 03 10:47:46 crc kubenswrapper[5010]: I0203 10:47:46.393379 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:47:46 crc kubenswrapper[5010]: I0203 10:47:46.394026 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.389925 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.390664 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.390745 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.391396 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.391450 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261" gracePeriod=600 Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.925062 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261" exitCode=0 Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.925165 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261"} Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.925513 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"} Feb 03 10:48:16 crc kubenswrapper[5010]: I0203 10:48:16.925550 5010 scope.go:117] "RemoveContainer" containerID="1d10eae99240283d55b9c85deaf52d7ded2dfa620944a687fc72bfe75b968fca" Feb 03 10:49:42 crc kubenswrapper[5010]: I0203 10:49:42.863890 5010 generic.go:334] "Generic (PLEG): container finished" podID="7353ead1-b7ae-446c-a262-5a383b1d7e52" containerID="b4880425775fd70bc813079913bbf8f5c4f8f371571355c4da87c44a571b62e6" exitCode=0 Feb 03 10:49:42 crc kubenswrapper[5010]: I0203 10:49:42.863972 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" event={"ID":"7353ead1-b7ae-446c-a262-5a383b1d7e52","Type":"ContainerDied","Data":"b4880425775fd70bc813079913bbf8f5c4f8f371571355c4da87c44a571b62e6"} Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.306854 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.407532 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.407874 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.407965 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-727r4\" (UniqueName: \"kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.408059 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.408119 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.408350 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.408390 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam\") pod \"7353ead1-b7ae-446c-a262-5a383b1d7e52\" (UID: \"7353ead1-b7ae-446c-a262-5a383b1d7e52\") " Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.414713 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.415525 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4" (OuterVolumeSpecName: "kube-api-access-727r4") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "kube-api-access-727r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.446996 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.449705 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.452404 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.462194 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory" (OuterVolumeSpecName: "inventory") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.478880 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "7353ead1-b7ae-446c-a262-5a383b1d7e52" (UID: "7353ead1-b7ae-446c-a262-5a383b1d7e52"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511194 5010 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511247 5010 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511264 5010 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511276 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511289 5010 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511299 5010 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7353ead1-b7ae-446c-a262-5a383b1d7e52-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:44 crc kubenswrapper[5010]: I0203 10:49:44.511316 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-727r4\" (UniqueName: \"kubernetes.io/projected/7353ead1-b7ae-446c-a262-5a383b1d7e52-kube-api-access-727r4\") on node \"crc\" DevicePath \"\"" Feb 03 10:49:45 crc kubenswrapper[5010]: I0203 10:49:45.016886 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" event={"ID":"7353ead1-b7ae-446c-a262-5a383b1d7e52","Type":"ContainerDied","Data":"91d0640bf20723aa34494df221748d24f3bd4a04ce7159801cea99aea978bc5e"} Feb 03 10:49:45 crc kubenswrapper[5010]: I0203 10:49:45.016945 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d0640bf20723aa34494df221748d24f3bd4a04ce7159801cea99aea978bc5e" Feb 03 10:49:45 crc kubenswrapper[5010]: I0203 10:49:45.017038 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.516155 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"] Feb 03 10:50:06 crc kubenswrapper[5010]: E0203 10:50:06.517169 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7353ead1-b7ae-446c-a262-5a383b1d7e52" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.517188 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="7353ead1-b7ae-446c-a262-5a383b1d7e52" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.517869 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="7353ead1-b7ae-446c-a262-5a383b1d7e52" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.519673 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.520240 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"] Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.625926 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.626071 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcw2d\" (UniqueName: \"kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.626120 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.728481 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.728556 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcw2d\" (UniqueName: \"kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.728583 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.729180 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.729350 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.751068 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcw2d\" (UniqueName: \"kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d\") pod \"redhat-operators-qddhr\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:06 crc kubenswrapper[5010]: I0203 10:50:06.847363 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:07 crc kubenswrapper[5010]: I0203 10:50:07.377429 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"] Feb 03 10:50:07 crc kubenswrapper[5010]: W0203 10:50:07.381538 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod050580f3_ed5d_45ed_9fd8_f1c04801481e.slice/crio-9173026a9319e270a4f767d7eb35ba42d9773d7168b4d9cb6580511e85f53807 WatchSource:0}: Error finding container 9173026a9319e270a4f767d7eb35ba42d9773d7168b4d9cb6580511e85f53807: Status 404 returned error can't find the container with id 9173026a9319e270a4f767d7eb35ba42d9773d7168b4d9cb6580511e85f53807 Feb 03 10:50:08 crc kubenswrapper[5010]: I0203 10:50:08.250067 5010 generic.go:334] "Generic (PLEG): container finished" podID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerID="f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4" exitCode=0 Feb 03 10:50:08 crc kubenswrapper[5010]: I0203 10:50:08.250130 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerDied","Data":"f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4"} Feb 03 10:50:08 crc kubenswrapper[5010]: I0203 10:50:08.250497 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerStarted","Data":"9173026a9319e270a4f767d7eb35ba42d9773d7168b4d9cb6580511e85f53807"} Feb 03 10:50:10 crc kubenswrapper[5010]: I0203 10:50:10.276503 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerStarted","Data":"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0"} Feb 03 10:50:12 crc kubenswrapper[5010]: I0203 10:50:12.303915 5010 
generic.go:334] "Generic (PLEG): container finished" podID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerID="8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0" exitCode=0 Feb 03 10:50:12 crc kubenswrapper[5010]: I0203 10:50:12.304035 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerDied","Data":"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0"} Feb 03 10:50:13 crc kubenswrapper[5010]: I0203 10:50:13.318489 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerStarted","Data":"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea"} Feb 03 10:50:13 crc kubenswrapper[5010]: I0203 10:50:13.351872 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qddhr" podStartSLOduration=2.502901355 podStartE2EDuration="7.35184595s" podCreationTimestamp="2026-02-03 10:50:06 +0000 UTC" firstStartedPulling="2026-02-03 10:50:08.254910038 +0000 UTC m=+2878.410886167" lastFinishedPulling="2026-02-03 10:50:13.103854633 +0000 UTC m=+2883.259830762" observedRunningTime="2026-02-03 10:50:13.346924684 +0000 UTC m=+2883.502900813" watchObservedRunningTime="2026-02-03 10:50:13.35184595 +0000 UTC m=+2883.507822079" Feb 03 10:50:16 crc kubenswrapper[5010]: I0203 10:50:16.390342 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:50:16 crc kubenswrapper[5010]: I0203 10:50:16.391155 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:50:16 crc kubenswrapper[5010]: I0203 10:50:16.847979 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:16 crc kubenswrapper[5010]: I0203 10:50:16.848401 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:17 crc kubenswrapper[5010]: I0203 10:50:17.908973 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qddhr" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server" probeResult="failure" output=< Feb 03 10:50:17 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 10:50:17 crc kubenswrapper[5010]: > Feb 03 10:50:27 crc kubenswrapper[5010]: I0203 10:50:27.899529 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qddhr" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server" probeResult="failure" output=< Feb 03 10:50:27 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 10:50:27 crc kubenswrapper[5010]: > Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.393319 5010 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/tempest-tests-tempest"] Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.395522 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.398380 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.400479 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.401132 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.402924 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-sbxfw" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.409763 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.449310 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.449370 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.449560 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.551856 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.551923 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.551965 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45sks\" (UniqueName: \"kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552006 5010 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552068 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552386 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552589 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552634 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.552731 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.553291 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.554255 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.560194 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.655695 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.656308 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.656365 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45sks\" (UniqueName: \"kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.656429 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.656556 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.656706 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.657273 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.657863 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.658144 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.662320 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.666996 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.679851 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45sks\" (UniqueName: \"kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.689000 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " pod="openstack/tempest-tests-tempest" Feb 03 10:50:30 crc kubenswrapper[5010]: I0203 10:50:30.716585 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 10:50:31 crc kubenswrapper[5010]: I0203 10:50:31.235903 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 10:50:31 crc kubenswrapper[5010]: I0203 10:50:31.569571 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8c8d92ab-5652-4bd9-81af-fd0be7aea36f","Type":"ContainerStarted","Data":"08d3852b3365aa6563a9026a76a312565c0566fd0792c861c656faa1a56176fa"} Feb 03 10:50:36 crc kubenswrapper[5010]: I0203 10:50:36.905940 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:36 crc kubenswrapper[5010]: I0203 10:50:36.962930 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:37 crc kubenswrapper[5010]: I0203 10:50:37.711845 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"] Feb 03 10:50:38 crc kubenswrapper[5010]: I0203 10:50:38.704181 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qddhr" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server" containerID="cri-o://85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea" gracePeriod=2 Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.273638 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.379394 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcw2d\" (UniqueName: \"kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d\") pod \"050580f3-ed5d-45ed-9fd8-f1c04801481e\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.379768 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities\") pod \"050580f3-ed5d-45ed-9fd8-f1c04801481e\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.379818 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content\") pod \"050580f3-ed5d-45ed-9fd8-f1c04801481e\" (UID: \"050580f3-ed5d-45ed-9fd8-f1c04801481e\") " Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.380658 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities" (OuterVolumeSpecName: "utilities") pod "050580f3-ed5d-45ed-9fd8-f1c04801481e" (UID: "050580f3-ed5d-45ed-9fd8-f1c04801481e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.389714 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d" (OuterVolumeSpecName: "kube-api-access-vcw2d") pod "050580f3-ed5d-45ed-9fd8-f1c04801481e" (UID: "050580f3-ed5d-45ed-9fd8-f1c04801481e"). InnerVolumeSpecName "kube-api-access-vcw2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.486759 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.486807 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcw2d\" (UniqueName: \"kubernetes.io/projected/050580f3-ed5d-45ed-9fd8-f1c04801481e-kube-api-access-vcw2d\") on node \"crc\" DevicePath \"\"" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.509564 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "050580f3-ed5d-45ed-9fd8-f1c04801481e" (UID: "050580f3-ed5d-45ed-9fd8-f1c04801481e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.589006 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/050580f3-ed5d-45ed-9fd8-f1c04801481e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.720101 5010 generic.go:334] "Generic (PLEG): container finished" podID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerID="85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea" exitCode=0 Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.720186 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerDied","Data":"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea"} Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.720256 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qddhr" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.720284 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qddhr" event={"ID":"050580f3-ed5d-45ed-9fd8-f1c04801481e","Type":"ContainerDied","Data":"9173026a9319e270a4f767d7eb35ba42d9773d7168b4d9cb6580511e85f53807"} Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.720331 5010 scope.go:117] "RemoveContainer" containerID="85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.749918 5010 scope.go:117] "RemoveContainer" containerID="8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.776252 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"] Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.787371 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qddhr"] Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.803996 5010 scope.go:117] "RemoveContainer" containerID="f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.846182 5010 scope.go:117] "RemoveContainer" containerID="85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea" Feb 03 10:50:39 crc kubenswrapper[5010]: E0203 10:50:39.847302 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea\": container with ID starting with 85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea not found: ID does not exist" containerID="85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.847367 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea"} err="failed to get container status \"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea\": rpc error: code = NotFound desc = could not find container \"85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea\": container with ID starting with 85860b503b9ba7598fb39790610446c469b2c9d3be36e384fd73332efea178ea not found: ID does not exist" Feb 03 10:50:39 crc 
kubenswrapper[5010]: I0203 10:50:39.847411 5010 scope.go:117] "RemoveContainer" containerID="8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0" Feb 03 10:50:39 crc kubenswrapper[5010]: E0203 10:50:39.847834 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0\": container with ID starting with 8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0 not found: ID does not exist" containerID="8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.847859 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0"} err="failed to get container status \"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0\": rpc error: code = NotFound desc = could not find container \"8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0\": container with ID starting with 8c17376d09d0f29ad79ff99c0b119376ff3c9c02f6cf9abfed976773c74141b0 not found: ID does not exist" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.847874 5010 scope.go:117] "RemoveContainer" containerID="f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4" Feb 03 10:50:39 crc kubenswrapper[5010]: E0203 10:50:39.848375 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4\": container with ID starting with f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4 not found: ID does not exist" containerID="f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4" Feb 03 10:50:39 crc kubenswrapper[5010]: I0203 10:50:39.848415 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4"} err="failed to get container status \"f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4\": rpc error: code = NotFound desc = could not find container \"f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4\": container with ID starting with f176807a926ead8616a6d27a2397327c698399d305df569130059169507178c4 not found: ID does not exist" Feb 03 10:50:40 crc kubenswrapper[5010]: I0203 10:50:40.530639 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" path="/var/lib/kubelet/pods/050580f3-ed5d-45ed-9fd8-f1c04801481e/volumes" Feb 03 10:50:46 crc kubenswrapper[5010]: I0203 10:50:46.390467 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:50:46 crc kubenswrapper[5010]: I0203 10:50:46.391184 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:51:06 crc kubenswrapper[5010]: E0203 10:51:06.449847 5010 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 03 10:51:06 crc kubenswrapper[5010]: E0203 10:51:06.452800 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45sks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(8c8d92ab-5652-4bd9-81af-fd0be7aea36f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 10:51:06 crc kubenswrapper[5010]: E0203 10:51:06.454130 5010 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="8c8d92ab-5652-4bd9-81af-fd0be7aea36f" Feb 03 10:51:07 crc kubenswrapper[5010]: E0203 10:51:07.044124 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="8c8d92ab-5652-4bd9-81af-fd0be7aea36f" Feb 03 10:51:16 crc kubenswrapper[5010]: I0203 10:51:16.390508 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:51:16 crc kubenswrapper[5010]: I0203 10:51:16.392893 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:51:16 crc kubenswrapper[5010]: I0203 10:51:16.393078 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:51:16 crc kubenswrapper[5010]: I0203 10:51:16.394166 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:51:16 crc kubenswrapper[5010]: I0203 10:51:16.394352 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" gracePeriod=600 Feb 03 10:51:16 crc kubenswrapper[5010]: E0203 10:51:16.534902 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:51:17 crc kubenswrapper[5010]: I0203 10:51:17.153176 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" exitCode=0 Feb 03 10:51:17 crc kubenswrapper[5010]: I0203 10:51:17.153260 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963"} Feb 03 10:51:17 crc kubenswrapper[5010]: I0203 10:51:17.153367 5010 scope.go:117] "RemoveContainer" containerID="b61671ae7473626ed1f7e8bbc62ee5800e0d1f9237e36316dd37140b902ac261" Feb 03 10:51:17 crc kubenswrapper[5010]: I0203 10:51:17.154583 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:51:17 crc kubenswrapper[5010]: E0203 10:51:17.155084 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:51:18 crc kubenswrapper[5010]: I0203 10:51:18.967156 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 03 10:51:20 crc kubenswrapper[5010]: I0203 10:51:20.191338 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8c8d92ab-5652-4bd9-81af-fd0be7aea36f","Type":"ContainerStarted","Data":"1dceb12710efc42bf7d1bc8254652d746deec954467b49662ae6e52ac9ca2747"} Feb 03 10:51:20 crc kubenswrapper[5010]: I0203 10:51:20.221023 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.503850372 podStartE2EDuration="51.22099766s" podCreationTimestamp="2026-02-03 10:50:29 +0000 UTC" firstStartedPulling="2026-02-03 10:50:31.245966243 +0000 UTC m=+2901.401942372" lastFinishedPulling="2026-02-03 10:51:18.963113541 +0000 UTC m=+2949.119089660" observedRunningTime="2026-02-03 10:51:20.213581351 +0000 UTC m=+2950.369557480" watchObservedRunningTime="2026-02-03 10:51:20.22099766 +0000 UTC m=+2950.376973789" Feb 03 10:51:29 crc kubenswrapper[5010]: I0203 10:51:29.503632 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:51:29 crc kubenswrapper[5010]: E0203 10:51:29.504706 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:51:43 crc kubenswrapper[5010]: I0203 10:51:43.503080 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:51:43 crc kubenswrapper[5010]: E0203 10:51:43.504164 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:51:58 crc kubenswrapper[5010]: I0203 10:51:58.512450 5010 scope.go:117] "RemoveContainer" 
containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:51:58 crc kubenswrapper[5010]: E0203 10:51:58.515067 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:52:13 crc kubenswrapper[5010]: I0203 10:52:13.503036 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:52:13 crc kubenswrapper[5010]: E0203 10:52:13.504243 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:52:27 crc kubenswrapper[5010]: I0203 10:52:27.502755 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:52:27 crc kubenswrapper[5010]: E0203 10:52:27.503946 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:52:38 crc kubenswrapper[5010]: I0203 10:52:38.503857 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:52:38 crc kubenswrapper[5010]: E0203 10:52:38.505133 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:52:53 crc kubenswrapper[5010]: I0203 10:52:53.502785 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:52:53 crc kubenswrapper[5010]: E0203 10:52:53.503819 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:53:04 crc kubenswrapper[5010]: I0203 10:53:04.503502 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:53:04 crc kubenswrapper[5010]: E0203 10:53:04.504432 5010 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:53:16 crc kubenswrapper[5010]: I0203 10:53:16.503108 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:53:16 crc kubenswrapper[5010]: E0203 10:53:16.505695 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:53:28 crc kubenswrapper[5010]: I0203 10:53:28.503319 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:53:28 crc kubenswrapper[5010]: E0203 10:53:28.504304 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:53:40 crc kubenswrapper[5010]: I0203 10:53:40.511031 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:53:40 crc kubenswrapper[5010]: E0203 10:53:40.512052 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:53:53 crc kubenswrapper[5010]: I0203 10:53:53.502610 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:53:53 crc kubenswrapper[5010]: E0203 10:53:53.503408 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:54:08 crc kubenswrapper[5010]: I0203 10:54:08.503193 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:54:08 crc kubenswrapper[5010]: E0203 10:54:08.504576 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.474998 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j5vwx"] Feb 03 10:54:17 crc kubenswrapper[5010]: E0203 10:54:17.476251 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="extract-utilities" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.476272 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="extract-utilities" Feb 03 10:54:17 crc kubenswrapper[5010]: E0203 10:54:17.479406 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.479454 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server" Feb 03 10:54:17 crc kubenswrapper[5010]: E0203 10:54:17.479565 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="extract-content" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.479572 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="extract-content" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.480048 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="050580f3-ed5d-45ed-9fd8-f1c04801481e" containerName="registry-server" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.481953 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.491054 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5vwx"] Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.553539 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.554551 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.554603 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfx4x\" (UniqueName: \"kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.657559 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.657672 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.657710 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfx4x\" (UniqueName: \"kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.657782 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.658184 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.684755 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gfx4x\" (UniqueName: \"kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x\") pod \"community-operators-j5vwx\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:17 crc kubenswrapper[5010]: I0203 10:54:17.809155 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:18 crc kubenswrapper[5010]: I0203 10:54:18.478442 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5vwx"] Feb 03 10:54:19 crc kubenswrapper[5010]: I0203 10:54:19.451514 5010 generic.go:334] "Generic (PLEG): container finished" podID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerID="5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8" exitCode=0 Feb 03 10:54:19 crc kubenswrapper[5010]: I0203 10:54:19.451661 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerDied","Data":"5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8"} Feb 03 10:54:19 crc kubenswrapper[5010]: I0203 10:54:19.451882 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerStarted","Data":"e5f41be29933987d2dba0464fb8639b118a5d77c9aa0f590b621c92b5c19e99e"} Feb 03 10:54:19 crc kubenswrapper[5010]: I0203 10:54:19.454361 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 10:54:20 crc kubenswrapper[5010]: I0203 10:54:20.475747 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerStarted","Data":"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0"} Feb 03 10:54:20 crc kubenswrapper[5010]: E0203 10:54:20.876952 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75610c94_1855_4f77_a701_8ef81b4d2e50.slice/crio-conmon-e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0.scope\": RecentStats: unable to find data in memory cache]" Feb 03 10:54:21 crc kubenswrapper[5010]: I0203 10:54:21.487305 5010 generic.go:334] "Generic (PLEG): container finished" podID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerID="e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0" exitCode=0 Feb 03 10:54:21 crc kubenswrapper[5010]: I0203 10:54:21.487380 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerDied","Data":"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0"} Feb 03 10:54:22 crc kubenswrapper[5010]: I0203 10:54:22.517040 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerStarted","Data":"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905"} Feb 03 10:54:22 crc kubenswrapper[5010]: I0203 10:54:22.534790 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-j5vwx" podStartSLOduration=3.057911914 podStartE2EDuration="5.534762362s" podCreationTimestamp="2026-02-03 10:54:17 +0000 UTC" firstStartedPulling="2026-02-03 10:54:19.453999524 +0000 UTC m=+3129.609975663" lastFinishedPulling="2026-02-03 10:54:21.930849982 +0000 UTC m=+3132.086826111" observedRunningTime="2026-02-03 10:54:22.531154309 +0000 UTC m=+3132.687130448" watchObservedRunningTime="2026-02-03 10:54:22.534762362 +0000 UTC m=+3132.690738501" Feb 03 10:54:23 crc kubenswrapper[5010]: I0203 10:54:23.502809 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:54:23 crc kubenswrapper[5010]: E0203 10:54:23.503424 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:54:27 crc kubenswrapper[5010]: I0203 10:54:27.809603 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:27 crc kubenswrapper[5010]: I0203 10:54:27.810303 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:27 crc kubenswrapper[5010]: I0203 10:54:27.863362 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:28 crc kubenswrapper[5010]: I0203 10:54:28.915671 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:28 crc kubenswrapper[5010]: I0203 10:54:28.976425 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j5vwx"] Feb 03 10:54:30 crc kubenswrapper[5010]: I0203 10:54:30.881075 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j5vwx" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="registry-server" containerID="cri-o://301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905" gracePeriod=2 Feb 03 10:54:31 crc kubenswrapper[5010]: E0203 10:54:31.152827 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75610c94_1855_4f77_a701_8ef81b4d2e50.slice/crio-conmon-301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905.scope\": RecentStats: unable to find data in memory cache]" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.579761 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.744795 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content\") pod \"75610c94-1855-4f77-a701-8ef81b4d2e50\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.745012 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities\") pod \"75610c94-1855-4f77-a701-8ef81b4d2e50\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.745109 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfx4x\" (UniqueName: \"kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x\") pod \"75610c94-1855-4f77-a701-8ef81b4d2e50\" (UID: \"75610c94-1855-4f77-a701-8ef81b4d2e50\") " Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.748571 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities" (OuterVolumeSpecName: "utilities") pod "75610c94-1855-4f77-a701-8ef81b4d2e50" (UID: "75610c94-1855-4f77-a701-8ef81b4d2e50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.754343 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x" (OuterVolumeSpecName: "kube-api-access-gfx4x") pod "75610c94-1855-4f77-a701-8ef81b4d2e50" (UID: "75610c94-1855-4f77-a701-8ef81b4d2e50"). InnerVolumeSpecName "kube-api-access-gfx4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.808325 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75610c94-1855-4f77-a701-8ef81b4d2e50" (UID: "75610c94-1855-4f77-a701-8ef81b4d2e50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.847935 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.847992 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfx4x\" (UniqueName: \"kubernetes.io/projected/75610c94-1855-4f77-a701-8ef81b4d2e50-kube-api-access-gfx4x\") on node \"crc\" DevicePath \"\"" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.848013 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75610c94-1855-4f77-a701-8ef81b4d2e50-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.897683 5010 generic.go:334] "Generic (PLEG): container finished" podID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerID="301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905" exitCode=0 Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.897753 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerDied","Data":"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905"} Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.897798 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5vwx" event={"ID":"75610c94-1855-4f77-a701-8ef81b4d2e50","Type":"ContainerDied","Data":"e5f41be29933987d2dba0464fb8639b118a5d77c9aa0f590b621c92b5c19e99e"} Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.897847 5010 scope.go:117] "RemoveContainer" containerID="301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.898083 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5vwx" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.947199 5010 scope.go:117] "RemoveContainer" containerID="e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0" Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.957691 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j5vwx"] Feb 03 10:54:31 crc kubenswrapper[5010]: I0203 10:54:31.971410 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j5vwx"] Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:31.999980 5010 scope.go:117] "RemoveContainer" containerID="5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8" Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.046288 5010 scope.go:117] "RemoveContainer" containerID="301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905" Feb 03 10:54:32 crc kubenswrapper[5010]: E0203 10:54:32.047479 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905\": container with ID starting with 301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905 not found: ID does not exist" containerID="301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905" Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.047577 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905"} err="failed to get container status \"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905\": rpc error: code = NotFound desc = could not find container \"301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905\": container with ID starting with 301b550f3a918d672dd26303ad4d034dc292a2b1496ea3af841e8801975ce905 not found: ID does not exist" Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.047666 5010 scope.go:117] "RemoveContainer" containerID="e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0" Feb 03 10:54:32 crc kubenswrapper[5010]: E0203 10:54:32.048505 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0\": container with ID starting with e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0 not found: ID does not exist" containerID="e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0" Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.048575 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0"} err="failed to get container status \"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0\": rpc error: code = NotFound desc = could not find container \"e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0\": container with ID starting with e980cb19cde53d530b885a57e43ecdd0970ea0ea02425b5436bbe03a053e20d0 not found: ID does not exist" Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.048615 5010 scope.go:117] "RemoveContainer" containerID="5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8" Feb 03 10:54:32 crc kubenswrapper[5010]: E0203 10:54:32.049054 5010 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8\": container with ID starting with 5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8 not found: ID does not exist" containerID="5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8" Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.049104 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8"} err="failed to get container status \"5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8\": rpc error: code = NotFound desc = could not find container \"5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8\": container with ID starting with 5a7f9c5e77983464234aae215b30f19eb88eb3fb62c5467f971421f2f81a7ab8 not found: ID does not exist" Feb 03 10:54:32 crc kubenswrapper[5010]: I0203 10:54:32.515081 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" path="/var/lib/kubelet/pods/75610c94-1855-4f77-a701-8ef81b4d2e50/volumes" Feb 03 10:54:37 crc kubenswrapper[5010]: I0203 10:54:37.502894 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:54:37 crc kubenswrapper[5010]: E0203 10:54:37.503781 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:54:50 crc kubenswrapper[5010]: I0203 10:54:50.508765 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:54:50 crc kubenswrapper[5010]: E0203 10:54:50.510103 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:55:01 crc kubenswrapper[5010]: I0203 10:55:01.502635 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:55:01 crc kubenswrapper[5010]: E0203 10:55:01.503573 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.696541 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"] Feb 03 10:55:11 crc kubenswrapper[5010]: E0203 10:55:11.697928 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" 
containerName="registry-server" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.697947 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="registry-server" Feb 03 10:55:11 crc kubenswrapper[5010]: E0203 10:55:11.697976 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="extract-content" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.697982 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="extract-content" Feb 03 10:55:11 crc kubenswrapper[5010]: E0203 10:55:11.697997 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="extract-utilities" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.698005 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="extract-utilities" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.698206 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="75610c94-1855-4f77-a701-8ef81b4d2e50" containerName="registry-server" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.700478 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.711330 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"] Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.851590 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.851794 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kscxt\" (UniqueName: \"kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.852423 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.954768 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kscxt\" (UniqueName: \"kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.954954 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities\") pod \"redhat-marketplace-9f8sv\" (UID: 
\"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.954981 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.955724 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.955815 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:11 crc kubenswrapper[5010]: I0203 10:55:11.980621 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kscxt\" (UniqueName: \"kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt\") pod \"redhat-marketplace-9f8sv\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:12 crc kubenswrapper[5010]: I0203 10:55:12.036281 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:12 crc kubenswrapper[5010]: I0203 10:55:12.692969 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"] Feb 03 10:55:13 crc kubenswrapper[5010]: I0203 10:55:13.432186 5010 generic.go:334] "Generic (PLEG): container finished" podID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerID="f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9" exitCode=0 Feb 03 10:55:13 crc kubenswrapper[5010]: I0203 10:55:13.432299 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerDied","Data":"f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9"} Feb 03 10:55:13 crc kubenswrapper[5010]: I0203 10:55:13.432713 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerStarted","Data":"0aa8a178688868fad4a61fcd06e29546fa595b6c0d9f307f06ce2cf1da409bb6"} Feb 03 10:55:13 crc kubenswrapper[5010]: I0203 10:55:13.502925 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:55:13 crc kubenswrapper[5010]: E0203 10:55:13.503681 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:55:14 crc kubenswrapper[5010]: I0203 10:55:14.447742 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerStarted","Data":"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461"} Feb 03 10:55:15 crc kubenswrapper[5010]: I0203 10:55:15.732381 5010 generic.go:334] "Generic (PLEG): container finished" podID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerID="953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461" exitCode=0 Feb 03 10:55:15 crc kubenswrapper[5010]: I0203 10:55:15.732903 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerDied","Data":"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461"} Feb 03 10:55:16 crc kubenswrapper[5010]: I0203 10:55:16.747572 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerStarted","Data":"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad"} Feb 03 10:55:16 crc kubenswrapper[5010]: I0203 10:55:16.769864 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9f8sv" podStartSLOduration=3.015479514 podStartE2EDuration="5.7698334s" podCreationTimestamp="2026-02-03 10:55:11 +0000 UTC" firstStartedPulling="2026-02-03 10:55:13.43530073 +0000 UTC m=+3183.591276859" lastFinishedPulling="2026-02-03 10:55:16.189654616 +0000 UTC m=+3186.345630745" observedRunningTime="2026-02-03 10:55:16.766841504 +0000 UTC m=+3186.922817633" watchObservedRunningTime="2026-02-03 10:55:16.7698334 +0000 UTC m=+3186.925809529" Feb 03 10:55:22 crc kubenswrapper[5010]: I0203 10:55:22.307692 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:22 crc kubenswrapper[5010]: I0203 10:55:22.326726 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:22 crc kubenswrapper[5010]: I0203 10:55:22.618978 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:22 crc kubenswrapper[5010]: I0203 10:55:22.942937 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:23 crc kubenswrapper[5010]: I0203 10:55:23.006807 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"] Feb 03 10:55:24 crc kubenswrapper[5010]: I0203 10:55:24.908278 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9f8sv" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="registry-server" containerID="cri-o://502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad" gracePeriod=2 Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.476194 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.491759 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kscxt\" (UniqueName: \"kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt\") pod \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.491848 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content\") pod \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.491885 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities\") pod \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\" (UID: \"a90875cc-2fcf-425f-b55f-f48f0d9a71a8\") " Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.494075 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities" (OuterVolumeSpecName: "utilities") pod "a90875cc-2fcf-425f-b55f-f48f0d9a71a8" (UID: "a90875cc-2fcf-425f-b55f-f48f0d9a71a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.507894 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:55:25 crc kubenswrapper[5010]: E0203 10:55:25.508178 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.515692 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt" (OuterVolumeSpecName: "kube-api-access-kscxt") pod "a90875cc-2fcf-425f-b55f-f48f0d9a71a8" (UID: "a90875cc-2fcf-425f-b55f-f48f0d9a71a8"). InnerVolumeSpecName "kube-api-access-kscxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.533751 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a90875cc-2fcf-425f-b55f-f48f0d9a71a8" (UID: "a90875cc-2fcf-425f-b55f-f48f0d9a71a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.597985 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kscxt\" (UniqueName: \"kubernetes.io/projected/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-kube-api-access-kscxt\") on node \"crc\" DevicePath \"\"" Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.598050 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.598065 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90875cc-2fcf-425f-b55f-f48f0d9a71a8-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.922291 5010 generic.go:334] "Generic (PLEG): container finished" podID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerID="502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad" exitCode=0 Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.922361 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerDied","Data":"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad"} Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.922405 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9f8sv" event={"ID":"a90875cc-2fcf-425f-b55f-f48f0d9a71a8","Type":"ContainerDied","Data":"0aa8a178688868fad4a61fcd06e29546fa595b6c0d9f307f06ce2cf1da409bb6"} Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.922430 5010 scope.go:117] "RemoveContainer" containerID="502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad" Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.922600 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9f8sv" Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.974807 5010 scope.go:117] "RemoveContainer" containerID="953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461" Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.981018 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"] Feb 03 10:55:25 crc kubenswrapper[5010]: I0203 10:55:25.994842 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9f8sv"] Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.023863 5010 scope.go:117] "RemoveContainer" containerID="f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9" Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.062964 5010 scope.go:117] "RemoveContainer" containerID="502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad" Feb 03 10:55:26 crc kubenswrapper[5010]: E0203 10:55:26.063689 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad\": container with ID starting with 502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad not found: ID does not exist" containerID="502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad" Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.063753 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad"} err="failed to get container status \"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad\": rpc error: code = NotFound desc = could not find container \"502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad\": container with ID starting with 502abf10453ed4797235da1d74d9b3018b9a278da2729acff6ef1a1902545dad not found: ID does not exist" Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.063789 5010 scope.go:117] "RemoveContainer" containerID="953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461" Feb 03 10:55:26 crc kubenswrapper[5010]: E0203 10:55:26.064625 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461\": container with ID starting with 953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461 not found: ID does not exist" containerID="953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461" Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.064667 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461"} err="failed to get container status \"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461\": rpc error: code = NotFound desc = could not find container \"953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461\": container with ID starting with 953dfccec4e14c654a4ca0cae9be28032c1d0cf3287d08f22124b1031c0b3461 not found: ID does not exist" Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.064713 5010 scope.go:117] "RemoveContainer" containerID="f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9" Feb 03 10:55:26 crc kubenswrapper[5010]: E0203 10:55:26.065915 5010 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9\": container with ID starting with f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9 not found: ID does not exist" containerID="f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9" Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.065960 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9"} err="failed to get container status \"f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9\": rpc error: code = NotFound desc = could not find container \"f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9\": container with ID starting with f3dca40395832985fc2f0f733968b498192a7cbd17676209dbf42953808936c9 not found: ID does not exist" Feb 03 10:55:26 crc kubenswrapper[5010]: I0203 10:55:26.518851 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" path="/var/lib/kubelet/pods/a90875cc-2fcf-425f-b55f-f48f0d9a71a8/volumes" Feb 03 10:55:40 crc kubenswrapper[5010]: I0203 10:55:40.510319 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:55:40 crc kubenswrapper[5010]: E0203 10:55:40.511305 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:55:54 crc kubenswrapper[5010]: I0203 10:55:54.502832 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:55:54 crc kubenswrapper[5010]: E0203 10:55:54.503874 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:56:07 crc kubenswrapper[5010]: I0203 10:56:07.503004 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:56:07 crc kubenswrapper[5010]: E0203 10:56:07.504286 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 10:56:22 crc kubenswrapper[5010]: I0203 10:56:22.503329 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:56:23 crc kubenswrapper[5010]: I0203 10:56:23.544680 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460"} Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.451639 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"] Feb 03 10:57:02 crc kubenswrapper[5010]: E0203 10:57:02.453010 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="extract-utilities" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.453036 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="extract-utilities" Feb 03 10:57:02 crc kubenswrapper[5010]: E0203 10:57:02.453097 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="extract-content" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.453108 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="extract-content" Feb 03 10:57:02 crc kubenswrapper[5010]: E0203 10:57:02.453126 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="registry-server" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.453135 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="registry-server" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.453456 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a90875cc-2fcf-425f-b55f-f48f0d9a71a8" containerName="registry-server" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.455739 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.471557 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"] Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.553609 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.554015 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.554353 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6zrc\" (UniqueName: \"kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.656732 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.656816 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.656989 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6zrc\" (UniqueName: \"kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.657765 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.658088 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.703266 5010 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c6zrc\" (UniqueName: \"kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc\") pod \"certified-operators-6xxhp\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:02 crc kubenswrapper[5010]: I0203 10:57:02.783038 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:03 crc kubenswrapper[5010]: I0203 10:57:03.401320 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"] Feb 03 10:57:04 crc kubenswrapper[5010]: I0203 10:57:04.069095 5010 generic.go:334] "Generic (PLEG): container finished" podID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerID="258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68" exitCode=0 Feb 03 10:57:04 crc kubenswrapper[5010]: I0203 10:57:04.069159 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerDied","Data":"258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68"} Feb 03 10:57:04 crc kubenswrapper[5010]: I0203 10:57:04.069204 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerStarted","Data":"a23ecdd349e9698971d6cc0130e941bb3dc5225a38df40061445a094d26767a2"} Feb 03 10:57:06 crc kubenswrapper[5010]: I0203 10:57:06.097189 5010 generic.go:334] "Generic (PLEG): container finished" podID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerID="ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18" exitCode=0 Feb 03 10:57:06 crc kubenswrapper[5010]: I0203 10:57:06.097268 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerDied","Data":"ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18"} Feb 03 10:57:07 crc kubenswrapper[5010]: I0203 10:57:07.112085 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerStarted","Data":"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149"} Feb 03 10:57:07 crc kubenswrapper[5010]: I0203 10:57:07.139169 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xxhp" podStartSLOduration=2.658385303 podStartE2EDuration="5.13914006s" podCreationTimestamp="2026-02-03 10:57:02 +0000 UTC" firstStartedPulling="2026-02-03 10:57:04.073783557 +0000 UTC m=+3294.229759686" lastFinishedPulling="2026-02-03 10:57:06.554538314 +0000 UTC m=+3296.710514443" observedRunningTime="2026-02-03 10:57:07.134120212 +0000 UTC m=+3297.290096341" watchObservedRunningTime="2026-02-03 10:57:07.13914006 +0000 UTC m=+3297.295116189" Feb 03 10:57:12 crc kubenswrapper[5010]: I0203 10:57:12.784028 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:12 crc kubenswrapper[5010]: I0203 10:57:12.784955 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:12 crc kubenswrapper[5010]: I0203 10:57:12.840252 5010 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:13 crc kubenswrapper[5010]: I0203 10:57:13.219970 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:13 crc kubenswrapper[5010]: I0203 10:57:13.281874 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"] Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.199937 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6xxhp" podUID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerName="registry-server" containerID="cri-o://e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149" gracePeriod=2 Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.713910 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.804856 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content\") pod \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.804918 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6zrc\" (UniqueName: \"kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc\") pod \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.804984 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities\") pod \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\" (UID: \"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8\") " Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.807946 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities" (OuterVolumeSpecName: "utilities") pod "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" (UID: "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.814326 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc" (OuterVolumeSpecName: "kube-api-access-c6zrc") pod "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" (UID: "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8"). InnerVolumeSpecName "kube-api-access-c6zrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.867901 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" (UID: "c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.907322 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.907710 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6zrc\" (UniqueName: \"kubernetes.io/projected/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-kube-api-access-c6zrc\") on node \"crc\" DevicePath \"\"" Feb 03 10:57:15 crc kubenswrapper[5010]: I0203 10:57:15.907791 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.221182 5010 generic.go:334] "Generic (PLEG): container finished" podID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerID="e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149" exitCode=0 Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.221306 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xxhp" Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.222494 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerDied","Data":"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149"} Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.222621 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xxhp" event={"ID":"c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8","Type":"ContainerDied","Data":"a23ecdd349e9698971d6cc0130e941bb3dc5225a38df40061445a094d26767a2"} Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.222724 5010 scope.go:117] "RemoveContainer" containerID="e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149" Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.300417 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"] Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.329359 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6xxhp"] Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.355547 5010 scope.go:117] "RemoveContainer" containerID="ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18" Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.399238 5010 scope.go:117] "RemoveContainer" containerID="258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68" Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.458375 5010 scope.go:117] "RemoveContainer" containerID="e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149" Feb 03 10:57:16 crc kubenswrapper[5010]: E0203 10:57:16.459898 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149\": container with ID starting with e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149 not found: ID does not exist" containerID="e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149" Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.459959 
5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149"} err="failed to get container status \"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149\": rpc error: code = NotFound desc = could not find container \"e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149\": container with ID starting with e7c269ac15b387e1b9c08c4a6ef995894843a7f2bb01cbcb0277ba463d210149 not found: ID does not exist" Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.459996 5010 scope.go:117] "RemoveContainer" containerID="ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18" Feb 03 10:57:16 crc kubenswrapper[5010]: E0203 10:57:16.460612 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18\": container with ID starting with ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18 not found: ID does not exist" containerID="ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18" Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.460665 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18"} err="failed to get container status \"ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18\": rpc error: code = NotFound desc = could not find container \"ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18\": container with ID starting with ed74b123d8869c4de05317cf924dfb73a4c070f0dc216d2e13a741f3378b5d18 not found: ID does not exist" Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.460706 5010 scope.go:117] "RemoveContainer" containerID="258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68" Feb 03 10:57:16 crc kubenswrapper[5010]: E0203 10:57:16.462476 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68\": container with ID starting with 258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68 not found: ID does not exist" containerID="258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68" Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.462545 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68"} err="failed to get container status \"258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68\": rpc error: code = NotFound desc = could not find container \"258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68\": container with ID starting with 258262cb8d5c0b00f873f30a1ddc931ca92428b326f5eb4dee8490bfcfe07b68 not found: ID does not exist" Feb 03 10:57:16 crc kubenswrapper[5010]: I0203 10:57:16.515420 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" path="/var/lib/kubelet/pods/c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8/volumes" Feb 03 10:58:46 crc kubenswrapper[5010]: I0203 10:58:46.390736 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:58:46 crc kubenswrapper[5010]: I0203 10:58:46.391627 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:59:16 crc kubenswrapper[5010]: I0203 10:59:16.390354 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:59:16 crc kubenswrapper[5010]: I0203 10:59:16.390813 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.392283 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.393244 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.393313 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.394689 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.394835 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460" gracePeriod=600 Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.865532 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460" exitCode=0 Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.866063 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460"} Feb 03 10:59:46 crc kubenswrapper[5010]: I0203 10:59:46.866179 5010 scope.go:117] "RemoveContainer" containerID="e84a27d4cdf3f8017935aa65f3f9f5cfa1374eefde5ac3b3cb0a03e9b8257963" Feb 03 10:59:47 crc kubenswrapper[5010]: I0203 10:59:47.883290 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"} Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.156397 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2"] Feb 03 11:00:00 crc kubenswrapper[5010]: E0203 11:00:00.157438 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerName="extract-utilities" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.157459 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerName="extract-utilities" Feb 03 11:00:00 crc kubenswrapper[5010]: E0203 11:00:00.157488 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerName="extract-content" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.157498 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerName="extract-content" Feb 03 11:00:00 crc kubenswrapper[5010]: E0203 11:00:00.157517 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerName="registry-server" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.157526 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerName="registry-server" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.157786 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f1fcce-f4ce-4ccb-bb80-c6594a7a05f8" containerName="registry-server" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.159061 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.162397 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.173577 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.177121 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2"] Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.256540 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxps6\" (UniqueName: \"kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.256622 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.256731 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.358519 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.358657 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.358808 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxps6\" (UniqueName: \"kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.360749 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume\") pod 
\"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.368100 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.385732 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxps6\" (UniqueName: \"kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6\") pod \"collect-profiles-29501940-ph7b2\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:00 crc kubenswrapper[5010]: I0203 11:00:00.489305 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:01 crc kubenswrapper[5010]: I0203 11:00:01.026028 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2"] Feb 03 11:00:02 crc kubenswrapper[5010]: I0203 11:00:02.108111 5010 generic.go:334] "Generic (PLEG): container finished" podID="b32288df-fb1b-4b63-b699-4eabdb2a0cea" containerID="33926290be86ca315743ea2dbeb58bb25d2755270bd9efcd12f13f2ea74329cd" exitCode=0 Feb 03 11:00:02 crc kubenswrapper[5010]: I0203 11:00:02.108972 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" event={"ID":"b32288df-fb1b-4b63-b699-4eabdb2a0cea","Type":"ContainerDied","Data":"33926290be86ca315743ea2dbeb58bb25d2755270bd9efcd12f13f2ea74329cd"} Feb 03 11:00:02 crc kubenswrapper[5010]: I0203 11:00:02.109010 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" event={"ID":"b32288df-fb1b-4b63-b699-4eabdb2a0cea","Type":"ContainerStarted","Data":"54d395f87e6be00632a11eb7daac0e9668f4044743e9f913e46a8cde154d6a6c"} Feb 03 11:00:03 crc kubenswrapper[5010]: I0203 11:00:03.998873 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.134953 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" event={"ID":"b32288df-fb1b-4b63-b699-4eabdb2a0cea","Type":"ContainerDied","Data":"54d395f87e6be00632a11eb7daac0e9668f4044743e9f913e46a8cde154d6a6c"} Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.135019 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54d395f87e6be00632a11eb7daac0e9668f4044743e9f913e46a8cde154d6a6c" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.135068 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501940-ph7b2" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.187537 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume\") pod \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.188247 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxps6\" (UniqueName: \"kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6\") pod \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.188425 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume\") pod \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\" (UID: \"b32288df-fb1b-4b63-b699-4eabdb2a0cea\") " Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.188631 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume" (OuterVolumeSpecName: "config-volume") pod "b32288df-fb1b-4b63-b699-4eabdb2a0cea" (UID: "b32288df-fb1b-4b63-b699-4eabdb2a0cea"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.189316 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b32288df-fb1b-4b63-b699-4eabdb2a0cea-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.196985 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6" (OuterVolumeSpecName: "kube-api-access-mxps6") pod "b32288df-fb1b-4b63-b699-4eabdb2a0cea" (UID: "b32288df-fb1b-4b63-b699-4eabdb2a0cea"). InnerVolumeSpecName "kube-api-access-mxps6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.199660 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b32288df-fb1b-4b63-b699-4eabdb2a0cea" (UID: "b32288df-fb1b-4b63-b699-4eabdb2a0cea"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.291342 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxps6\" (UniqueName: \"kubernetes.io/projected/b32288df-fb1b-4b63-b699-4eabdb2a0cea-kube-api-access-mxps6\") on node \"crc\" DevicePath \"\"" Feb 03 11:00:04 crc kubenswrapper[5010]: I0203 11:00:04.291839 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b32288df-fb1b-4b63-b699-4eabdb2a0cea-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 11:00:05 crc kubenswrapper[5010]: I0203 11:00:05.098924 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz"] Feb 03 11:00:05 crc kubenswrapper[5010]: I0203 11:00:05.133902 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501895-dwjmz"] Feb 03 11:00:06 crc kubenswrapper[5010]: I0203 11:00:06.519352 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eae17d2-2362-4e78-908b-42fcb386ec60" path="/var/lib/kubelet/pods/0eae17d2-2362-4e78-908b-42fcb386ec60/volumes" Feb 03 11:00:23 crc kubenswrapper[5010]: I0203 11:00:23.699020 5010 scope.go:117] "RemoveContainer" containerID="73db75a439822b6dd55d522e4da89fbd20aa66ab67d412f72f9dfe07016f6245" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.329096 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"] Feb 03 11:00:40 crc kubenswrapper[5010]: E0203 11:00:40.330617 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32288df-fb1b-4b63-b699-4eabdb2a0cea" containerName="collect-profiles" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.330636 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32288df-fb1b-4b63-b699-4eabdb2a0cea" containerName="collect-profiles" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.330858 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="b32288df-fb1b-4b63-b699-4eabdb2a0cea" containerName="collect-profiles" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.332817 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.352343 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"] Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.453476 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kmc6\" (UniqueName: \"kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.453749 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.453929 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.557155 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kmc6\" (UniqueName: \"kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.557618 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.557823 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.558277 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.558387 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.590028 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6kmc6\" (UniqueName: \"kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6\") pod \"redhat-operators-pn7mc\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:40 crc kubenswrapper[5010]: I0203 11:00:40.655771 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:41 crc kubenswrapper[5010]: I0203 11:00:41.165065 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"] Feb 03 11:00:41 crc kubenswrapper[5010]: I0203 11:00:41.529977 5010 generic.go:334] "Generic (PLEG): container finished" podID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerID="5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a" exitCode=0 Feb 03 11:00:41 crc kubenswrapper[5010]: I0203 11:00:41.530451 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerDied","Data":"5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a"} Feb 03 11:00:41 crc kubenswrapper[5010]: I0203 11:00:41.530594 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerStarted","Data":"bd74c4623e52fe568fd7ff3a820e2825e2272286970c6342fa508c26eaf7252a"} Feb 03 11:00:41 crc kubenswrapper[5010]: I0203 11:00:41.532224 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 11:00:42 crc kubenswrapper[5010]: I0203 11:00:42.544491 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerStarted","Data":"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b"} Feb 03 11:00:45 crc kubenswrapper[5010]: I0203 11:00:45.585778 5010 generic.go:334] "Generic (PLEG): container finished" podID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerID="402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b" exitCode=0 Feb 03 11:00:45 crc kubenswrapper[5010]: I0203 11:00:45.586277 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerDied","Data":"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b"} Feb 03 11:00:47 crc kubenswrapper[5010]: I0203 11:00:47.607974 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerStarted","Data":"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d"} Feb 03 11:00:47 crc kubenswrapper[5010]: I0203 11:00:47.633084 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pn7mc" podStartSLOduration=2.69933617 podStartE2EDuration="7.633065291s" podCreationTimestamp="2026-02-03 11:00:40 +0000 UTC" firstStartedPulling="2026-02-03 11:00:41.531947027 +0000 UTC m=+3511.687923156" lastFinishedPulling="2026-02-03 11:00:46.465676148 +0000 UTC m=+3516.621652277" observedRunningTime="2026-02-03 11:00:47.626514847 +0000 UTC m=+3517.782490976" watchObservedRunningTime="2026-02-03 11:00:47.633065291 +0000 UTC m=+3517.789041410" Feb 03 11:00:50 crc 
kubenswrapper[5010]: I0203 11:00:50.656707 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:50 crc kubenswrapper[5010]: I0203 11:00:50.658200 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:00:51 crc kubenswrapper[5010]: I0203 11:00:51.714145 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pn7mc" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="registry-server" probeResult="failure" output=< Feb 03 11:00:51 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 11:00:51 crc kubenswrapper[5010]: > Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.155288 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29501941-gv4sr"] Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.157924 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.174069 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29501941-gv4sr"] Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.318450 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-config-data\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.318585 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-fernet-keys\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.318618 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpcrj\" (UniqueName: \"kubernetes.io/projected/96c330a2-14f4-4923-8707-6b9cce98267f-kube-api-access-zpcrj\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.318651 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-combined-ca-bundle\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.420639 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-config-data\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.420777 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-fernet-keys\") pod 
\"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.420813 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpcrj\" (UniqueName: \"kubernetes.io/projected/96c330a2-14f4-4923-8707-6b9cce98267f-kube-api-access-zpcrj\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.420853 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-combined-ca-bundle\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.427612 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-combined-ca-bundle\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.427813 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-config-data\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.429002 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-fernet-keys\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.440375 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpcrj\" (UniqueName: \"kubernetes.io/projected/96c330a2-14f4-4923-8707-6b9cce98267f-kube-api-access-zpcrj\") pod \"keystone-cron-29501941-gv4sr\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.480237 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.718667 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.777113 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.966345 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"] Feb 03 11:01:00 crc kubenswrapper[5010]: I0203 11:01:00.977241 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29501941-gv4sr"] Feb 03 11:01:01 crc kubenswrapper[5010]: I0203 11:01:01.769024 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29501941-gv4sr" event={"ID":"96c330a2-14f4-4923-8707-6b9cce98267f","Type":"ContainerStarted","Data":"02224ab559c551eecf6a9d4b9738db9679403937e8a11a5ef3eb2f054b61b9f4"} Feb 03 11:01:01 crc kubenswrapper[5010]: I0203 11:01:01.769408 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29501941-gv4sr" event={"ID":"96c330a2-14f4-4923-8707-6b9cce98267f","Type":"ContainerStarted","Data":"06fb52ad183ab788fc0bbae5e208e4038eec5dd6e3afd34dc9e60c51a49cf92f"} Feb 03 11:01:01 crc kubenswrapper[5010]: I0203 11:01:01.769201 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pn7mc" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="registry-server" containerID="cri-o://d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d" gracePeriod=2 Feb 03 11:01:01 crc kubenswrapper[5010]: I0203 11:01:01.800042 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29501941-gv4sr" podStartSLOduration=1.8000245879999999 podStartE2EDuration="1.800024588s" podCreationTimestamp="2026-02-03 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 11:01:01.79772166 +0000 UTC m=+3531.953697789" watchObservedRunningTime="2026-02-03 11:01:01.800024588 +0000 UTC m=+3531.956000717" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.427446 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.572390 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kmc6\" (UniqueName: \"kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6\") pod \"3b136e4b-d6df-4608-8e99-4d63efe1d513\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.572600 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities\") pod \"3b136e4b-d6df-4608-8e99-4d63efe1d513\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.572659 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content\") pod \"3b136e4b-d6df-4608-8e99-4d63efe1d513\" (UID: \"3b136e4b-d6df-4608-8e99-4d63efe1d513\") " Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.574958 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities" (OuterVolumeSpecName: "utilities") pod "3b136e4b-d6df-4608-8e99-4d63efe1d513" (UID: "3b136e4b-d6df-4608-8e99-4d63efe1d513"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.595664 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6" (OuterVolumeSpecName: "kube-api-access-6kmc6") pod "3b136e4b-d6df-4608-8e99-4d63efe1d513" (UID: "3b136e4b-d6df-4608-8e99-4d63efe1d513"). InnerVolumeSpecName "kube-api-access-6kmc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.675512 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.675555 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kmc6\" (UniqueName: \"kubernetes.io/projected/3b136e4b-d6df-4608-8e99-4d63efe1d513-kube-api-access-6kmc6\") on node \"crc\" DevicePath \"\"" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.743766 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b136e4b-d6df-4608-8e99-4d63efe1d513" (UID: "3b136e4b-d6df-4608-8e99-4d63efe1d513"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.778246 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b136e4b-d6df-4608-8e99-4d63efe1d513-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.780980 5010 generic.go:334] "Generic (PLEG): container finished" podID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerID="d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d" exitCode=0 Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.781114 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pn7mc" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.781231 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerDied","Data":"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d"} Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.781366 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn7mc" event={"ID":"3b136e4b-d6df-4608-8e99-4d63efe1d513","Type":"ContainerDied","Data":"bd74c4623e52fe568fd7ff3a820e2825e2272286970c6342fa508c26eaf7252a"} Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.781400 5010 scope.go:117] "RemoveContainer" containerID="d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.832507 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"] Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.835592 5010 scope.go:117] "RemoveContainer" containerID="402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.844587 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pn7mc"] Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.872465 5010 scope.go:117] "RemoveContainer" containerID="5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.918590 5010 scope.go:117] "RemoveContainer" containerID="d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d" Feb 03 11:01:02 crc kubenswrapper[5010]: E0203 11:01:02.923994 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d\": container with ID starting with d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d not found: ID does not exist" containerID="d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.924053 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d"} err="failed to get container status \"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d\": rpc error: code = NotFound desc = could not find container \"d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d\": container with ID starting with d79a2764ab7402abbd6242fce8bbd6bb8df7f204ffb24015a22a0b5d7afd700d not found: ID does not exist" Feb 03 11:01:02 crc 
kubenswrapper[5010]: I0203 11:01:02.924088 5010 scope.go:117] "RemoveContainer" containerID="402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b" Feb 03 11:01:02 crc kubenswrapper[5010]: E0203 11:01:02.924473 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b\": container with ID starting with 402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b not found: ID does not exist" containerID="402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.924510 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b"} err="failed to get container status \"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b\": rpc error: code = NotFound desc = could not find container \"402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b\": container with ID starting with 402bd9730a1a9d49f9ce6d70c4690569a37653003035d7e967a98cf100e3281b not found: ID does not exist" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.924533 5010 scope.go:117] "RemoveContainer" containerID="5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a" Feb 03 11:01:02 crc kubenswrapper[5010]: E0203 11:01:02.924969 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a\": container with ID starting with 5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a not found: ID does not exist" containerID="5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a" Feb 03 11:01:02 crc kubenswrapper[5010]: I0203 11:01:02.925000 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a"} err="failed to get container status \"5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a\": rpc error: code = NotFound desc = could not find container \"5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a\": container with ID starting with 5fbaf14c88cad66c19b95c7039865f8c906e97a861524971a4a4ca118714fc0a not found: ID does not exist" Feb 03 11:01:04 crc kubenswrapper[5010]: I0203 11:01:04.514982 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" path="/var/lib/kubelet/pods/3b136e4b-d6df-4608-8e99-4d63efe1d513/volumes" Feb 03 11:01:04 crc kubenswrapper[5010]: I0203 11:01:04.809512 5010 generic.go:334] "Generic (PLEG): container finished" podID="96c330a2-14f4-4923-8707-6b9cce98267f" containerID="02224ab559c551eecf6a9d4b9738db9679403937e8a11a5ef3eb2f054b61b9f4" exitCode=0 Feb 03 11:01:04 crc kubenswrapper[5010]: I0203 11:01:04.809573 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29501941-gv4sr" event={"ID":"96c330a2-14f4-4923-8707-6b9cce98267f","Type":"ContainerDied","Data":"02224ab559c551eecf6a9d4b9738db9679403937e8a11a5ef3eb2f054b61b9f4"} Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.273790 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.356592 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-config-data\") pod \"96c330a2-14f4-4923-8707-6b9cce98267f\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.356668 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-combined-ca-bundle\") pod \"96c330a2-14f4-4923-8707-6b9cce98267f\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.356773 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpcrj\" (UniqueName: \"kubernetes.io/projected/96c330a2-14f4-4923-8707-6b9cce98267f-kube-api-access-zpcrj\") pod \"96c330a2-14f4-4923-8707-6b9cce98267f\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.356869 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-fernet-keys\") pod \"96c330a2-14f4-4923-8707-6b9cce98267f\" (UID: \"96c330a2-14f4-4923-8707-6b9cce98267f\") " Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.362683 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c330a2-14f4-4923-8707-6b9cce98267f-kube-api-access-zpcrj" (OuterVolumeSpecName: "kube-api-access-zpcrj") pod "96c330a2-14f4-4923-8707-6b9cce98267f" (UID: "96c330a2-14f4-4923-8707-6b9cce98267f"). InnerVolumeSpecName "kube-api-access-zpcrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.378336 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "96c330a2-14f4-4923-8707-6b9cce98267f" (UID: "96c330a2-14f4-4923-8707-6b9cce98267f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.386956 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96c330a2-14f4-4923-8707-6b9cce98267f" (UID: "96c330a2-14f4-4923-8707-6b9cce98267f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.413094 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-config-data" (OuterVolumeSpecName: "config-data") pod "96c330a2-14f4-4923-8707-6b9cce98267f" (UID: "96c330a2-14f4-4923-8707-6b9cce98267f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.461990 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.462036 5010 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.462053 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpcrj\" (UniqueName: \"kubernetes.io/projected/96c330a2-14f4-4923-8707-6b9cce98267f-kube-api-access-zpcrj\") on node \"crc\" DevicePath \"\"" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.462065 5010 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96c330a2-14f4-4923-8707-6b9cce98267f-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.829545 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29501941-gv4sr" event={"ID":"96c330a2-14f4-4923-8707-6b9cce98267f","Type":"ContainerDied","Data":"06fb52ad183ab788fc0bbae5e208e4038eec5dd6e3afd34dc9e60c51a49cf92f"} Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.829875 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06fb52ad183ab788fc0bbae5e208e4038eec5dd6e3afd34dc9e60c51a49cf92f" Feb 03 11:01:06 crc kubenswrapper[5010]: I0203 11:01:06.829602 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29501941-gv4sr" Feb 03 11:01:46 crc kubenswrapper[5010]: I0203 11:01:46.390618 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:01:46 crc kubenswrapper[5010]: I0203 11:01:46.391727 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:02:16 crc kubenswrapper[5010]: I0203 11:02:16.389962 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:02:16 crc kubenswrapper[5010]: I0203 11:02:16.391018 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.389925 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.390501 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.390559 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.391559 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.391618 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" gracePeriod=600 Feb 03 11:02:46 crc kubenswrapper[5010]: E0203 11:02:46.527119 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.956781 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" exitCode=0 Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.956841 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"} Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.956884 5010 scope.go:117] "RemoveContainer" containerID="954ea60c6e1c907175e18b080d65b7e14b322101b2585bb6251035ace6752460" Feb 03 11:02:46 crc kubenswrapper[5010]: I0203 11:02:46.957671 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:02:46 crc kubenswrapper[5010]: E0203 11:02:46.958022 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:03:01 crc kubenswrapper[5010]: I0203 
11:03:01.502277 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:03:01 crc kubenswrapper[5010]: E0203 11:03:01.503097 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:03:14 crc kubenswrapper[5010]: I0203 11:03:14.502599 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:03:14 crc kubenswrapper[5010]: E0203 11:03:14.504816 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:03:26 crc kubenswrapper[5010]: I0203 11:03:26.503006 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:03:26 crc kubenswrapper[5010]: E0203 11:03:26.503751 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:03:38 crc kubenswrapper[5010]: I0203 11:03:38.503373 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:03:38 crc kubenswrapper[5010]: E0203 11:03:38.504573 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:03:49 crc kubenswrapper[5010]: I0203 11:03:49.502747 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:03:49 crc kubenswrapper[5010]: E0203 11:03:49.503472 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:04:01 crc kubenswrapper[5010]: I0203 11:04:01.502638 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:04:01 crc kubenswrapper[5010]: E0203 11:04:01.503911 
5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:04:12 crc kubenswrapper[5010]: I0203 11:04:12.503542 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:04:12 crc kubenswrapper[5010]: E0203 11:04:12.504459 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:04:17 crc kubenswrapper[5010]: I0203 11:04:17.869884 5010 generic.go:334] "Generic (PLEG): container finished" podID="8c8d92ab-5652-4bd9-81af-fd0be7aea36f" containerID="1dceb12710efc42bf7d1bc8254652d746deec954467b49662ae6e52ac9ca2747" exitCode=0 Feb 03 11:04:17 crc kubenswrapper[5010]: I0203 11:04:17.869950 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"8c8d92ab-5652-4bd9-81af-fd0be7aea36f","Type":"ContainerDied","Data":"1dceb12710efc42bf7d1bc8254652d746deec954467b49662ae6e52ac9ca2747"} Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.209435 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.263983 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264163 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264252 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264342 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45sks\" (UniqueName: \"kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264381 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264485 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264519 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264577 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.264628 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config\") pod \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\" (UID: \"8c8d92ab-5652-4bd9-81af-fd0be7aea36f\") " Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.265673 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data" (OuterVolumeSpecName: "config-data") pod 
"8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.265998 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.270412 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.271098 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks" (OuterVolumeSpecName: "kube-api-access-45sks") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "kube-api-access-45sks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.273649 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.294637 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.299367 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.300119 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.328801 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8c8d92ab-5652-4bd9-81af-fd0be7aea36f" (UID: "8c8d92ab-5652-4bd9-81af-fd0be7aea36f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.367904 5010 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.367944 5010 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.367959 5010 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.367978 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.367993 5010 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.368047 5010 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.368061 5010 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.368073 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45sks\" (UniqueName: \"kubernetes.io/projected/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-kube-api-access-45sks\") on node \"crc\" DevicePath \"\"" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.368087 5010 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8c8d92ab-5652-4bd9-81af-fd0be7aea36f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.390910 5010 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.469950 5010 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.896548 5010 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"8c8d92ab-5652-4bd9-81af-fd0be7aea36f","Type":"ContainerDied","Data":"08d3852b3365aa6563a9026a76a312565c0566fd0792c861c656faa1a56176fa"} Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.896614 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08d3852b3365aa6563a9026a76a312565c0566fd0792c861c656faa1a56176fa" Feb 03 11:04:19 crc kubenswrapper[5010]: I0203 11:04:19.896628 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.659918 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 11:04:24 crc kubenswrapper[5010]: E0203 11:04:24.661097 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="registry-server" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661119 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="registry-server" Feb 03 11:04:24 crc kubenswrapper[5010]: E0203 11:04:24.661134 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="extract-content" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661142 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="extract-content" Feb 03 11:04:24 crc kubenswrapper[5010]: E0203 11:04:24.661158 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8d92ab-5652-4bd9-81af-fd0be7aea36f" containerName="tempest-tests-tempest-tests-runner" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661169 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8d92ab-5652-4bd9-81af-fd0be7aea36f" containerName="tempest-tests-tempest-tests-runner" Feb 03 11:04:24 crc kubenswrapper[5010]: E0203 11:04:24.661200 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c330a2-14f4-4923-8707-6b9cce98267f" containerName="keystone-cron" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661208 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c330a2-14f4-4923-8707-6b9cce98267f" containerName="keystone-cron" Feb 03 11:04:24 crc kubenswrapper[5010]: E0203 11:04:24.661263 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="extract-utilities" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661272 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="extract-utilities" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661501 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c330a2-14f4-4923-8707-6b9cce98267f" containerName="keystone-cron" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661521 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b136e4b-d6df-4608-8e99-4d63efe1d513" containerName="registry-server" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.661536 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8d92ab-5652-4bd9-81af-fd0be7aea36f" containerName="tempest-tests-tempest-tests-runner" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.662486 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.665880 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-sbxfw" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.671786 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.793380 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jqzv\" (UniqueName: \"kubernetes.io/projected/8dfa1254-0d2c-4885-a531-fc90541692e7-kube-api-access-2jqzv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.793528 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.895666 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.895822 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jqzv\" (UniqueName: \"kubernetes.io/projected/8dfa1254-0d2c-4885-a531-fc90541692e7-kube-api-access-2jqzv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.896803 5010 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.929352 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jqzv\" (UniqueName: \"kubernetes.io/projected/8dfa1254-0d2c-4885-a531-fc90541692e7-kube-api-access-2jqzv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 11:04:24 crc kubenswrapper[5010]: I0203 11:04:24.929906 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8dfa1254-0d2c-4885-a531-fc90541692e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 11:04:24 crc 
kubenswrapper[5010]: I0203 11:04:24.985699 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 11:04:25 crc kubenswrapper[5010]: I0203 11:04:25.543935 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 11:04:25 crc kubenswrapper[5010]: I0203 11:04:25.965564 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8dfa1254-0d2c-4885-a531-fc90541692e7","Type":"ContainerStarted","Data":"e341c550b31d29eb33b1c0a71c63d307d4cc08c9d8897e30349883e45037a56e"} Feb 03 11:04:26 crc kubenswrapper[5010]: I0203 11:04:26.503529 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:04:26 crc kubenswrapper[5010]: E0203 11:04:26.503947 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:04:29 crc kubenswrapper[5010]: I0203 11:04:28.999670 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8dfa1254-0d2c-4885-a531-fc90541692e7","Type":"ContainerStarted","Data":"a348e3b9174781a806094c750543012ef2237e2d290dc5b69e33c27024d730dc"} Feb 03 11:04:29 crc kubenswrapper[5010]: I0203 11:04:29.025287 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.7439766629999998 podStartE2EDuration="5.025261051s" podCreationTimestamp="2026-02-03 11:04:24 +0000 UTC" firstStartedPulling="2026-02-03 11:04:25.549635685 +0000 UTC m=+3735.705611814" lastFinishedPulling="2026-02-03 11:04:27.830920073 +0000 UTC m=+3737.986896202" observedRunningTime="2026-02-03 11:04:29.01885253 +0000 UTC m=+3739.174828649" watchObservedRunningTime="2026-02-03 11:04:29.025261051 +0000 UTC m=+3739.181237190" Feb 03 11:04:40 crc kubenswrapper[5010]: I0203 11:04:40.514417 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:04:40 crc kubenswrapper[5010]: E0203 11:04:40.515275 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.777776 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfbsh/must-gather-hdcmp"] Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.782176 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.786556 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hfbsh"/"kube-root-ca.crt" Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.786569 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hfbsh"/"default-dockercfg-d5q4j" Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.786801 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hfbsh"/"openshift-service-ca.crt" Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.793492 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hfbsh/must-gather-hdcmp"] Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.912476 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4jzz\" (UniqueName: \"kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:50 crc kubenswrapper[5010]: I0203 11:04:50.912799 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.015995 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4jzz\" (UniqueName: \"kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.016076 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.016527 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.040887 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4jzz\" (UniqueName: \"kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz\") pod \"must-gather-hdcmp\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") " pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.105432 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.502690 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:04:51 crc kubenswrapper[5010]: E0203 11:04:51.503517 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:04:51 crc kubenswrapper[5010]: I0203 11:04:51.645255 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hfbsh/must-gather-hdcmp"] Feb 03 11:04:52 crc kubenswrapper[5010]: I0203 11:04:52.243935 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" event={"ID":"a60388dd-8e4d-463c-a5da-b210ae7c19fd","Type":"ContainerStarted","Data":"733196c23cec8a07b2e963207170368dc3a4f7a3b1625d9daceaf99fb3062f38"} Feb 03 11:04:56 crc kubenswrapper[5010]: I0203 11:04:56.303414 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" event={"ID":"a60388dd-8e4d-463c-a5da-b210ae7c19fd","Type":"ContainerStarted","Data":"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96"} Feb 03 11:04:57 crc kubenswrapper[5010]: I0203 11:04:57.319980 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" event={"ID":"a60388dd-8e4d-463c-a5da-b210ae7c19fd","Type":"ContainerStarted","Data":"d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774"} Feb 03 11:04:57 crc kubenswrapper[5010]: I0203 11:04:57.349165 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" podStartSLOduration=3.185955369 podStartE2EDuration="7.349135453s" podCreationTimestamp="2026-02-03 11:04:50 +0000 UTC" firstStartedPulling="2026-02-03 11:04:51.627906193 +0000 UTC m=+3761.783882322" lastFinishedPulling="2026-02-03 11:04:55.791086277 +0000 UTC m=+3765.947062406" observedRunningTime="2026-02-03 11:04:57.339862332 +0000 UTC m=+3767.495838471" watchObservedRunningTime="2026-02-03 11:04:57.349135453 +0000 UTC m=+3767.505111582" Feb 03 11:05:00 crc kubenswrapper[5010]: I0203 11:05:00.845114 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-knxkc"] Feb 03 11:05:00 crc kubenswrapper[5010]: I0203 11:05:00.848075 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:00 crc kubenswrapper[5010]: I0203 11:05:00.972552 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:00 crc kubenswrapper[5010]: I0203 11:05:00.972752 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2n55\" (UniqueName: \"kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.075418 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.075535 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2n55\" (UniqueName: \"kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.075635 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.109014 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2n55\" (UniqueName: \"kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55\") pod \"crc-debug-knxkc\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.171420 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:05:01 crc kubenswrapper[5010]: I0203 11:05:01.379425 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" event={"ID":"dd16c451-5cc4-448a-b612-059a4c677f3a","Type":"ContainerStarted","Data":"c1ab1788d82b88c9a9c9bced47ba87ac4f6c2b40b93983006b0e6ecb867d4af2"} Feb 03 11:05:04 crc kubenswrapper[5010]: I0203 11:05:04.504294 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:05:04 crc kubenswrapper[5010]: E0203 11:05:04.505365 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:05:15 crc kubenswrapper[5010]: I0203 11:05:15.504145 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:05:15 crc kubenswrapper[5010]: E0203 11:05:15.506755 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:05:17 crc kubenswrapper[5010]: E0203 11:05:17.144863 5010 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Feb 03 11:05:17 crc kubenswrapper[5010]: E0203 11:05:17.145368 5010 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2n55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-knxkc_openshift-must-gather-hfbsh(dd16c451-5cc4-448a-b612-059a4c677f3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 11:05:17 crc kubenswrapper[5010]: E0203 11:05:17.147049 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" Feb 03 11:05:17 crc kubenswrapper[5010]: E0203 11:05:17.571974 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" Feb 03 11:05:27 crc kubenswrapper[5010]: I0203 11:05:27.502409 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:05:27 crc kubenswrapper[5010]: E0203 11:05:27.505121 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:05:31 crc kubenswrapper[5010]: I0203 11:05:31.712100 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" event={"ID":"dd16c451-5cc4-448a-b612-059a4c677f3a","Type":"ContainerStarted","Data":"1f9a8d3208b3a091c4939acca4f01ee3cd93e0bcc6269bf3b3f3541f7c35fd87"} Feb 03 11:05:31 crc kubenswrapper[5010]: I0203 11:05:31.739027 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" podStartSLOduration=1.905847362 podStartE2EDuration="31.739000879s" podCreationTimestamp="2026-02-03 11:05:00 +0000 UTC" firstStartedPulling="2026-02-03 11:05:01.243893754 +0000 UTC 
m=+3771.399869883" lastFinishedPulling="2026-02-03 11:05:31.077047251 +0000 UTC m=+3801.233023400" observedRunningTime="2026-02-03 11:05:31.730737003 +0000 UTC m=+3801.886713132" watchObservedRunningTime="2026-02-03 11:05:31.739000879 +0000 UTC m=+3801.894977008" Feb 03 11:05:38 crc kubenswrapper[5010]: I0203 11:05:38.502988 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:05:38 crc kubenswrapper[5010]: E0203 11:05:38.503756 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:05:50 crc kubenswrapper[5010]: I0203 11:05:50.513018 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:05:50 crc kubenswrapper[5010]: E0203 11:05:50.514356 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:00 crc kubenswrapper[5010]: I0203 11:06:00.959084 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:00 crc kubenswrapper[5010]: I0203 11:06:00.962152 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:00 crc kubenswrapper[5010]: I0203 11:06:00.984513 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.052969 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.053021 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gll69\" (UniqueName: \"kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.053085 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.155283 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.155501 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.155542 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gll69\" (UniqueName: \"kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.155993 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.156531 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.178700 5010 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gll69\" (UniqueName: \"kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69\") pod \"redhat-marketplace-j9shv\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.290679 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:01 crc kubenswrapper[5010]: I0203 11:06:01.838974 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:02 crc kubenswrapper[5010]: I0203 11:06:02.060717 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerStarted","Data":"5cc6fe406958620e7e04ce434688e594444745b347a57f9de5721db9cf7c2290"} Feb 03 11:06:03 crc kubenswrapper[5010]: I0203 11:06:03.072703 5010 generic.go:334] "Generic (PLEG): container finished" podID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerID="43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7" exitCode=0 Feb 03 11:06:03 crc kubenswrapper[5010]: I0203 11:06:03.074312 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerDied","Data":"43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7"} Feb 03 11:06:03 crc kubenswrapper[5010]: I0203 11:06:03.075116 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 11:06:03 crc kubenswrapper[5010]: I0203 11:06:03.503517 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:06:03 crc kubenswrapper[5010]: E0203 11:06:03.504392 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:04 crc kubenswrapper[5010]: I0203 11:06:04.083733 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerStarted","Data":"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4"} Feb 03 11:06:05 crc kubenswrapper[5010]: I0203 11:06:05.096252 5010 generic.go:334] "Generic (PLEG): container finished" podID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerID="903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4" exitCode=0 Feb 03 11:06:05 crc kubenswrapper[5010]: I0203 11:06:05.096394 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerDied","Data":"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4"} Feb 03 11:06:06 crc kubenswrapper[5010]: I0203 11:06:06.109162 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" 
event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerStarted","Data":"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8"} Feb 03 11:06:06 crc kubenswrapper[5010]: I0203 11:06:06.139297 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j9shv" podStartSLOduration=3.5060776970000003 podStartE2EDuration="6.139273358s" podCreationTimestamp="2026-02-03 11:06:00 +0000 UTC" firstStartedPulling="2026-02-03 11:06:03.074848749 +0000 UTC m=+3833.230824868" lastFinishedPulling="2026-02-03 11:06:05.7080444 +0000 UTC m=+3835.864020529" observedRunningTime="2026-02-03 11:06:06.132748275 +0000 UTC m=+3836.288724424" watchObservedRunningTime="2026-02-03 11:06:06.139273358 +0000 UTC m=+3836.295249487" Feb 03 11:06:11 crc kubenswrapper[5010]: I0203 11:06:11.291303 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:11 crc kubenswrapper[5010]: I0203 11:06:11.291722 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:11 crc kubenswrapper[5010]: I0203 11:06:11.343057 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:12 crc kubenswrapper[5010]: I0203 11:06:12.230686 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:12 crc kubenswrapper[5010]: I0203 11:06:12.293444 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.187008 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j9shv" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="registry-server" containerID="cri-o://ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8" gracePeriod=2 Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.507325 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:06:14 crc kubenswrapper[5010]: E0203 11:06:14.507558 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.866998 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.996788 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities\") pod \"96b0797d-7099-4ce0-a9a7-063e41fce220\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.997205 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gll69\" (UniqueName: \"kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69\") pod \"96b0797d-7099-4ce0-a9a7-063e41fce220\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.997349 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content\") pod \"96b0797d-7099-4ce0-a9a7-063e41fce220\" (UID: \"96b0797d-7099-4ce0-a9a7-063e41fce220\") " Feb 03 11:06:14 crc kubenswrapper[5010]: I0203 11:06:14.998059 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities" (OuterVolumeSpecName: "utilities") pod "96b0797d-7099-4ce0-a9a7-063e41fce220" (UID: "96b0797d-7099-4ce0-a9a7-063e41fce220"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.014028 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69" (OuterVolumeSpecName: "kube-api-access-gll69") pod "96b0797d-7099-4ce0-a9a7-063e41fce220" (UID: "96b0797d-7099-4ce0-a9a7-063e41fce220"). InnerVolumeSpecName "kube-api-access-gll69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.026982 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96b0797d-7099-4ce0-a9a7-063e41fce220" (UID: "96b0797d-7099-4ce0-a9a7-063e41fce220"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.101021 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gll69\" (UniqueName: \"kubernetes.io/projected/96b0797d-7099-4ce0-a9a7-063e41fce220-kube-api-access-gll69\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.101391 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.101532 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b0797d-7099-4ce0-a9a7-063e41fce220-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.215366 5010 generic.go:334] "Generic (PLEG): container finished" podID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerID="ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8" exitCode=0 Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.215426 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9shv" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.215442 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerDied","Data":"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8"} Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.215489 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9shv" event={"ID":"96b0797d-7099-4ce0-a9a7-063e41fce220","Type":"ContainerDied","Data":"5cc6fe406958620e7e04ce434688e594444745b347a57f9de5721db9cf7c2290"} Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.215537 5010 scope.go:117] "RemoveContainer" containerID="ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.244330 5010 scope.go:117] "RemoveContainer" containerID="903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.260499 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.272483 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9shv"] Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.277735 5010 scope.go:117] "RemoveContainer" containerID="43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.344474 5010 scope.go:117] "RemoveContainer" containerID="ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8" Feb 03 11:06:15 crc kubenswrapper[5010]: E0203 11:06:15.344975 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8\": container with ID starting with ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8 not found: ID does not exist" containerID="ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.345025 5010 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8"} err="failed to get container status \"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8\": rpc error: code = NotFound desc = could not find container \"ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8\": container with ID starting with ad35246c1c4136d71feb7eed7ef26d1faaf966a87dd17940f83e78258bc592e8 not found: ID does not exist" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.345055 5010 scope.go:117] "RemoveContainer" containerID="903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4" Feb 03 11:06:15 crc kubenswrapper[5010]: E0203 11:06:15.345454 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4\": container with ID starting with 903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4 not found: ID does not exist" containerID="903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.345503 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4"} err="failed to get container status \"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4\": rpc error: code = NotFound desc = could not find container \"903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4\": container with ID starting with 903326d9d70f485a88c6e24a923a949831ab03ba6b183d1bfa4f835a7f60f4f4 not found: ID does not exist" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.345522 5010 scope.go:117] "RemoveContainer" containerID="43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7" Feb 03 11:06:15 crc kubenswrapper[5010]: E0203 11:06:15.345862 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7\": container with ID starting with 43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7 not found: ID does not exist" containerID="43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7" Feb 03 11:06:15 crc kubenswrapper[5010]: I0203 11:06:15.345890 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7"} err="failed to get container status \"43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7\": rpc error: code = NotFound desc = could not find container \"43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7\": container with ID starting with 43d13dea32f096eb53a920692ae12df4fb1b47317c47714feb239e848ec608c7 not found: ID does not exist" Feb 03 11:06:16 crc kubenswrapper[5010]: I0203 11:06:16.519802 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" path="/var/lib/kubelet/pods/96b0797d-7099-4ce0-a9a7-063e41fce220/volumes" Feb 03 11:06:19 crc kubenswrapper[5010]: I0203 11:06:19.265735 5010 generic.go:334] "Generic (PLEG): container finished" podID="dd16c451-5cc4-448a-b612-059a4c677f3a" containerID="1f9a8d3208b3a091c4939acca4f01ee3cd93e0bcc6269bf3b3f3541f7c35fd87" exitCode=0 Feb 03 11:06:19 crc kubenswrapper[5010]: I0203 
11:06:19.265826 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" event={"ID":"dd16c451-5cc4-448a-b612-059a4c677f3a","Type":"ContainerDied","Data":"1f9a8d3208b3a091c4939acca4f01ee3cd93e0bcc6269bf3b3f3541f7c35fd87"} Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.417541 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.425113 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2n55\" (UniqueName: \"kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55\") pod \"dd16c451-5cc4-448a-b612-059a4c677f3a\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.433878 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55" (OuterVolumeSpecName: "kube-api-access-x2n55") pod "dd16c451-5cc4-448a-b612-059a4c677f3a" (UID: "dd16c451-5cc4-448a-b612-059a4c677f3a"). InnerVolumeSpecName "kube-api-access-x2n55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.478168 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-knxkc"] Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.488630 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-knxkc"] Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.530417 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host\") pod \"dd16c451-5cc4-448a-b612-059a4c677f3a\" (UID: \"dd16c451-5cc4-448a-b612-059a4c677f3a\") " Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.530642 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host" (OuterVolumeSpecName: "host") pod "dd16c451-5cc4-448a-b612-059a4c677f3a" (UID: "dd16c451-5cc4-448a-b612-059a4c677f3a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.531423 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd16c451-5cc4-448a-b612-059a4c677f3a-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.531460 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2n55\" (UniqueName: \"kubernetes.io/projected/dd16c451-5cc4-448a-b612-059a4c677f3a-kube-api-access-x2n55\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:20 crc kubenswrapper[5010]: I0203 11:06:20.533160 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" path="/var/lib/kubelet/pods/dd16c451-5cc4-448a-b612-059a4c677f3a/volumes" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.293483 5010 scope.go:117] "RemoveContainer" containerID="1f9a8d3208b3a091c4939acca4f01ee3cd93e0bcc6269bf3b3f3541f7c35fd87" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.293569 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-knxkc" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658006 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-dv2q8"] Feb 03 11:06:21 crc kubenswrapper[5010]: E0203 11:06:21.658665 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="extract-content" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658678 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="extract-content" Feb 03 11:06:21 crc kubenswrapper[5010]: E0203 11:06:21.658691 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" containerName="container-00" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658696 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" containerName="container-00" Feb 03 11:06:21 crc kubenswrapper[5010]: E0203 11:06:21.658725 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="extract-utilities" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658733 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="extract-utilities" Feb 03 11:06:21 crc kubenswrapper[5010]: E0203 11:06:21.658750 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="registry-server" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658756 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="registry-server" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658946 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd16c451-5cc4-448a-b612-059a4c677f3a" containerName="container-00" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.658970 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b0797d-7099-4ce0-a9a7-063e41fce220" containerName="registry-server" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.659617 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.760429 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5gcr\" (UniqueName: \"kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.760526 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.862475 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5gcr\" (UniqueName: \"kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.862606 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.862731 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.883688 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5gcr\" (UniqueName: \"kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr\") pod \"crc-debug-dv2q8\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:21 crc kubenswrapper[5010]: I0203 11:06:21.988391 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:22 crc kubenswrapper[5010]: I0203 11:06:22.308942 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" event={"ID":"4cc3c54a-befe-4c86-8ae8-e0759feb54be","Type":"ContainerStarted","Data":"15af73ffcb8b076a414d8d6a10ce7a3a25d6b8b42f27c224aa60cf656f8481d3"} Feb 03 11:06:22 crc kubenswrapper[5010]: I0203 11:06:22.309348 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" event={"ID":"4cc3c54a-befe-4c86-8ae8-e0759feb54be","Type":"ContainerStarted","Data":"79807be93a018e5e3e7ee81fcbdd530b0a73975e4ce12033454593dbc2394f7c"} Feb 03 11:06:23 crc kubenswrapper[5010]: I0203 11:06:23.320828 5010 generic.go:334] "Generic (PLEG): container finished" podID="4cc3c54a-befe-4c86-8ae8-e0759feb54be" containerID="15af73ffcb8b076a414d8d6a10ce7a3a25d6b8b42f27c224aa60cf656f8481d3" exitCode=0 Feb 03 11:06:23 crc kubenswrapper[5010]: I0203 11:06:23.320898 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" event={"ID":"4cc3c54a-befe-4c86-8ae8-e0759feb54be","Type":"ContainerDied","Data":"15af73ffcb8b076a414d8d6a10ce7a3a25d6b8b42f27c224aa60cf656f8481d3"} Feb 03 11:06:23 crc kubenswrapper[5010]: I0203 11:06:23.896068 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-dv2q8"] Feb 03 11:06:23 crc kubenswrapper[5010]: I0203 11:06:23.906061 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-dv2q8"] Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.451294 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.617870 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5gcr\" (UniqueName: \"kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr\") pod \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.618069 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host\") pod \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\" (UID: \"4cc3c54a-befe-4c86-8ae8-e0759feb54be\") " Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.618206 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host" (OuterVolumeSpecName: "host") pod "4cc3c54a-befe-4c86-8ae8-e0759feb54be" (UID: "4cc3c54a-befe-4c86-8ae8-e0759feb54be"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.619433 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4cc3c54a-befe-4c86-8ae8-e0759feb54be-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.636587 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr" (OuterVolumeSpecName: "kube-api-access-v5gcr") pod "4cc3c54a-befe-4c86-8ae8-e0759feb54be" (UID: "4cc3c54a-befe-4c86-8ae8-e0759feb54be"). 
InnerVolumeSpecName "kube-api-access-v5gcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:06:24 crc kubenswrapper[5010]: I0203 11:06:24.721794 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5gcr\" (UniqueName: \"kubernetes.io/projected/4cc3c54a-befe-4c86-8ae8-e0759feb54be-kube-api-access-v5gcr\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.096433 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-5rbtp"] Feb 03 11:06:25 crc kubenswrapper[5010]: E0203 11:06:25.096976 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc3c54a-befe-4c86-8ae8-e0759feb54be" containerName="container-00" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.096999 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc3c54a-befe-4c86-8ae8-e0759feb54be" containerName="container-00" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.097231 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc3c54a-befe-4c86-8ae8-e0759feb54be" containerName="container-00" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.098007 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.232748 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lwf\" (UniqueName: \"kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.232907 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.334573 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77lwf\" (UniqueName: \"kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.335131 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.335202 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.344832 5010 scope.go:117] "RemoveContainer" containerID="15af73ffcb8b076a414d8d6a10ce7a3a25d6b8b42f27c224aa60cf656f8481d3" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.345013 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-dv2q8" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.352995 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77lwf\" (UniqueName: \"kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf\") pod \"crc-debug-5rbtp\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: I0203 11:06:25.417829 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:25 crc kubenswrapper[5010]: W0203 11:06:25.469429 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod862810dd_615e_414c_96cd_45c3e36631c5.slice/crio-2de5e4e984d0eb512f2e79d5fe584e2238824bb16d10676b1e8366de87309253 WatchSource:0}: Error finding container 2de5e4e984d0eb512f2e79d5fe584e2238824bb16d10676b1e8366de87309253: Status 404 returned error can't find the container with id 2de5e4e984d0eb512f2e79d5fe584e2238824bb16d10676b1e8366de87309253 Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.356602 5010 generic.go:334] "Generic (PLEG): container finished" podID="862810dd-615e-414c-96cd-45c3e36631c5" containerID="eb7fe92d16e697b6743828ad5dc47e1b42d862e83b10caf5c522d84c6c42c336" exitCode=0 Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.356718 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" event={"ID":"862810dd-615e-414c-96cd-45c3e36631c5","Type":"ContainerDied","Data":"eb7fe92d16e697b6743828ad5dc47e1b42d862e83b10caf5c522d84c6c42c336"} Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.356970 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" event={"ID":"862810dd-615e-414c-96cd-45c3e36631c5","Type":"ContainerStarted","Data":"2de5e4e984d0eb512f2e79d5fe584e2238824bb16d10676b1e8366de87309253"} Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.428907 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-5rbtp"] Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.437110 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfbsh/crc-debug-5rbtp"] Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.503036 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:06:26 crc kubenswrapper[5010]: E0203 11:06:26.503737 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:26 crc kubenswrapper[5010]: I0203 11:06:26.522424 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cc3c54a-befe-4c86-8ae8-e0759feb54be" path="/var/lib/kubelet/pods/4cc3c54a-befe-4c86-8ae8-e0759feb54be/volumes" Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.469607 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.584646 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77lwf\" (UniqueName: \"kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf\") pod \"862810dd-615e-414c-96cd-45c3e36631c5\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.584771 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host\") pod \"862810dd-615e-414c-96cd-45c3e36631c5\" (UID: \"862810dd-615e-414c-96cd-45c3e36631c5\") " Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.585821 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host" (OuterVolumeSpecName: "host") pod "862810dd-615e-414c-96cd-45c3e36631c5" (UID: "862810dd-615e-414c-96cd-45c3e36631c5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.599299 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf" (OuterVolumeSpecName: "kube-api-access-77lwf") pod "862810dd-615e-414c-96cd-45c3e36631c5" (UID: "862810dd-615e-414c-96cd-45c3e36631c5"). InnerVolumeSpecName "kube-api-access-77lwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.687637 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77lwf\" (UniqueName: \"kubernetes.io/projected/862810dd-615e-414c-96cd-45c3e36631c5-kube-api-access-77lwf\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:27 crc kubenswrapper[5010]: I0203 11:06:27.687695 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/862810dd-615e-414c-96cd-45c3e36631c5-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:06:28 crc kubenswrapper[5010]: I0203 11:06:28.381036 5010 scope.go:117] "RemoveContainer" containerID="eb7fe92d16e697b6743828ad5dc47e1b42d862e83b10caf5c522d84c6c42c336" Feb 03 11:06:28 crc kubenswrapper[5010]: I0203 11:06:28.381162 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/crc-debug-5rbtp" Feb 03 11:06:28 crc kubenswrapper[5010]: I0203 11:06:28.517893 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862810dd-615e-414c-96cd-45c3e36631c5" path="/var/lib/kubelet/pods/862810dd-615e-414c-96cd-45c3e36631c5/volumes" Feb 03 11:06:40 crc kubenswrapper[5010]: I0203 11:06:40.519911 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:06:40 crc kubenswrapper[5010]: E0203 11:06:40.520694 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.405844 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f67746f54-2l6b9_3bab826b-af5f-4bd1-a68a-0bdda5f89d80/barbican-api/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.464430 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f67746f54-2l6b9_3bab826b-af5f-4bd1-a68a-0bdda5f89d80/barbican-api-log/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.602977 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85855ff49d-76x8k_f377630f-64f3-4fd9-8449-53d739d775c2/barbican-keystone-listener/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.700125 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85855ff49d-76x8k_f377630f-64f3-4fd9-8449-53d739d775c2/barbican-keystone-listener-log/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.756640 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bdd746887-zr9j6_4cb276c1-b6b3-45ef-84be-8bae1d46d9d7/barbican-worker/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.810327 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bdd746887-zr9j6_4cb276c1-b6b3-45ef-84be-8bae1d46d9d7/barbican-worker-log/0.log" Feb 03 11:06:44 crc kubenswrapper[5010]: I0203 11:06:44.949541 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf_2d389772-7902-4aca-8bc3-03a0708fbaa2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.045325 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/ceilometer-central-agent/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.172987 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/proxy-httpd/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.197678 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/ceilometer-notification-agent/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.209469 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/sg-core/0.log" Feb 
03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.409531 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7e079d37-86a2-4be8-a16b-821095c780f0/cinder-api-log/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.491728 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7e079d37-86a2-4be8-a16b-821095c780f0/cinder-api/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.538586 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_63ed8c2d-6ac3-4a61-8e4c-1601efeca708/cinder-scheduler/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.685143 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_63ed8c2d-6ac3-4a61-8e4c-1601efeca708/probe/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.746107 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5tffc_efb76028-3500-476c-adef-dfc87d2cdab7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:45 crc kubenswrapper[5010]: I0203 11:06:45.953368 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ktk67_f4e7c571-ff51-496f-81b8-2fee3f357d3f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.028575 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/init/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.179015 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/init/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.217263 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/dnsmasq-dns/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.251596 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs_96722ef6-9c22-4700-8163-b25503d014bd/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.436774 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1769cccf-496c-4370-8e08-e1f156fecd77/glance-httpd/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.469162 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1769cccf-496c-4370-8e08-e1f156fecd77/glance-log/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.644841 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a/glance-httpd/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.696115 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a/glance-log/0.log" Feb 03 11:06:46 crc kubenswrapper[5010]: I0203 11:06:46.865838 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon/1.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.142279 5010 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon/0.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.317960 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-msc5t_af6128d5-2369-4ef9-99aa-61ad0bf3b213/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.471816 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon-log/0.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.486884 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-hz8vx_49056616-86cd-41cd-a102-1072dc2a79f4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.746554 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-675cc696d4-7wvtv_8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4/keystone-api/0.log" Feb 03 11:06:47 crc kubenswrapper[5010]: I0203 11:06:47.890757 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29501941-gv4sr_96c330a2-14f4-4923-8707-6b9cce98267f/keystone-cron/0.log" Feb 03 11:06:48 crc kubenswrapper[5010]: I0203 11:06:48.124804 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_de374df0-0b73-4be2-9719-d4b471782ed4/kube-state-metrics/0.log" Feb 03 11:06:48 crc kubenswrapper[5010]: I0203 11:06:48.169188 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d_5b7ff70c-1251-4fd5-a71c-bf6703bcc85d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:48 crc kubenswrapper[5010]: I0203 11:06:48.603767 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78c78c7889-r9575_158ac65e-849e-4f85-a4b6-1ac4bde1a1ec/neutron-api/0.log" Feb 03 11:06:48 crc kubenswrapper[5010]: I0203 11:06:48.717040 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78c78c7889-r9575_158ac65e-849e-4f85-a4b6-1ac4bde1a1ec/neutron-httpd/0.log" Feb 03 11:06:48 crc kubenswrapper[5010]: I0203 11:06:48.814672 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p_4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.317547 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_26dec936-0343-4d5f-8f2b-cf2a797786b5/nova-cell0-conductor-conductor/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.343595 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aba2689d-cd13-4601-ac45-69409c411839/nova-api-log/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.648131 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aba2689d-cd13-4601-ac45-69409c411839/nova-api-api/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.820955 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c9bd4788-ae5f-49c4-8116-04076a16f4f1/nova-cell1-novncproxy-novncproxy/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.903002 5010 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_291a9878-85fe-4988-8a7d-1da10ac49b23/nova-cell1-conductor-conductor/0.log" Feb 03 11:06:49 crc kubenswrapper[5010]: I0203 11:06:49.942706 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-bq7n5_6fd37dcf-e81a-491a-a5e1-01a27517d1b4/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.256996 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_edaaf3a7-a254-4a29-875a-643e46308f33/nova-metadata-log/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.530478 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/mysql-bootstrap/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.574809 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_28559aae-4731-4653-a466-8c6f5c6c7dcf/nova-scheduler-scheduler/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.764235 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/mysql-bootstrap/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.793454 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/galera/0.log" Feb 03 11:06:50 crc kubenswrapper[5010]: I0203 11:06:50.996709 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/mysql-bootstrap/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.212461 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/mysql-bootstrap/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.217284 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/galera/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.430883 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c80632c0-72bc-461d-8e87-591d0ddbc1a8/openstackclient/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.551985 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vqkq5_5235b9fc-3723-4d8a-9851-e8ee89c0b084/openstack-network-exporter/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.562472 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_edaaf3a7-a254-4a29-875a-643e46308f33/nova-metadata-metadata/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.727686 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server-init/0.log" Feb 03 11:06:51 crc kubenswrapper[5010]: I0203 11:06:51.923896 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server-init/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.016751 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovs-vswitchd/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.030498 5010 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.194010 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ql6ht_1883c30e-4c38-468d-a5dc-91b07f167d67/ovn-controller/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.361138 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-js9ms_a3aac34b-fb9e-4853-9a1d-c311dc75f055/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.453743 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5158e153-9918-4fce-8f2f-75a87b96562b/openstack-network-exporter/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.502976 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:06:52 crc kubenswrapper[5010]: E0203 11:06:52.503548 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.563487 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5158e153-9918-4fce-8f2f-75a87b96562b/ovn-northd/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.691895 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6d6abf1f-9905-4f96-8d44-d7ef3f9f299d/openstack-network-exporter/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.720679 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6d6abf1f-9905-4f96-8d44-d7ef3f9f299d/ovsdbserver-nb/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.889998 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6dfa0a64-db8a-457a-8eff-f27ffa8e02ce/ovsdbserver-sb/0.log" Feb 03 11:06:52 crc kubenswrapper[5010]: I0203 11:06:52.936681 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6dfa0a64-db8a-457a-8eff-f27ffa8e02ce/openstack-network-exporter/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.234171 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bc6c5cf68-f9b4p_3ecd94c1-1faa-4acd-aa24-dd54388d2d99/placement-log/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.282812 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bc6c5cf68-f9b4p_3ecd94c1-1faa-4acd-aa24-dd54388d2d99/placement-api/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.287048 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/setup-container/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.594640 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/rabbitmq/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.630154 5010 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/setup-container/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.652331 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/setup-container/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.802586 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/setup-container/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.823451 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/rabbitmq/0.log" Feb 03 11:06:53 crc kubenswrapper[5010]: I0203 11:06:53.903112 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt_d4357ef1-04ea-4dbd-acd8-70f34a5a72a1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.099882 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-r8zqk_36d3f978-a301-44e6-a401-72e94c9f70ad/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.247706 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mg749_43ecdc43-d866-4902-89cb-0ce68e89fe05/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.372966 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-nm955_a9fa7d27-81da-4dcd-adef-cb22c35d2641/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.551961 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pfhx5_67a7675c-9074-4390-85ab-2bba845b2dc0/ssh-known-hosts-edpm-deployment/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.734544 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7594db59b7-8cg94_a0d01af0-abb7-4cd1-92d7-d741182948f9/proxy-httpd/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.761074 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7594db59b7-8cg94_a0d01af0-abb7-4cd1-92d7-d741182948f9/proxy-server/0.log" Feb 03 11:06:54 crc kubenswrapper[5010]: I0203 11:06:54.851704 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-n8qtn_65c9ffaf-83e3-47c1-a1e8-b097b371ccec/swift-ring-rebalance/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.006182 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-auditor/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.053502 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-replicator/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.157244 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-reaper/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.268405 5010 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-server/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.269912 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-auditor/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.343578 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-replicator/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.383435 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-server/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.512977 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-updater/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.573798 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-expirer/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.629413 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-auditor/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.631791 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-replicator/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.753746 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-server/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.808808 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-updater/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.862501 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/rsync/0.log" Feb 03 11:06:55 crc kubenswrapper[5010]: I0203 11:06:55.911486 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/swift-recon-cron/0.log" Feb 03 11:06:56 crc kubenswrapper[5010]: I0203 11:06:56.122871 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h_7353ead1-b7ae-446c-a262-5a383b1d7e52/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:06:56 crc kubenswrapper[5010]: I0203 11:06:56.213681 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_8c8d92ab-5652-4bd9-81af-fd0be7aea36f/tempest-tests-tempest-tests-runner/0.log" Feb 03 11:06:56 crc kubenswrapper[5010]: I0203 11:06:56.390205 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8dfa1254-0d2c-4885-a531-fc90541692e7/test-operator-logs-container/0.log" Feb 03 11:06:56 crc kubenswrapper[5010]: I0203 11:06:56.454317 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7_3109739d-69b7-439a-b6c4-a8affbe0af4f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:07:04 
crc kubenswrapper[5010]: I0203 11:07:04.857317 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_95adc2d1-1093-484e-8580-53e244b420c8/memcached/0.log" Feb 03 11:07:05 crc kubenswrapper[5010]: I0203 11:07:05.502919 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:07:05 crc kubenswrapper[5010]: E0203 11:07:05.503288 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.758400 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6h57h"] Feb 03 11:07:17 crc kubenswrapper[5010]: E0203 11:07:17.759269 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862810dd-615e-414c-96cd-45c3e36631c5" containerName="container-00" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.759283 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="862810dd-615e-414c-96cd-45c3e36631c5" containerName="container-00" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.759518 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="862810dd-615e-414c-96cd-45c3e36631c5" containerName="container-00" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.761767 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.821400 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p79nm\" (UniqueName: \"kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.821490 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.821525 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.859717 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6h57h"] Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.923652 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p79nm\" (UniqueName: \"kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm\") pod \"certified-operators-6h57h\" (UID: 
\"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.923731 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.923769 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.924490 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.924544 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:17 crc kubenswrapper[5010]: I0203 11:07:17.953411 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p79nm\" (UniqueName: \"kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm\") pod \"certified-operators-6h57h\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") " pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:18 crc kubenswrapper[5010]: I0203 11:07:18.089804 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:18 crc kubenswrapper[5010]: I0203 11:07:18.630427 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6h57h"] Feb 03 11:07:18 crc kubenswrapper[5010]: I0203 11:07:18.991775 5010 generic.go:334] "Generic (PLEG): container finished" podID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerID="91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f" exitCode=0 Feb 03 11:07:18 crc kubenswrapper[5010]: I0203 11:07:18.991907 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerDied","Data":"91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f"} Feb 03 11:07:18 crc kubenswrapper[5010]: I0203 11:07:18.993557 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerStarted","Data":"6fbc71d6cf4d21787d118de08f943a41757fb79167e2a84dc014c9c9697ac8eb"} Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.127679 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rkxrd"] Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.130195 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.158323 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkxrd"] Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.258500 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhs6p\" (UniqueName: \"kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.258588 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.258815 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.362019 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhs6p\" (UniqueName: \"kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.362163 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.362264 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.363176 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.363870 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.392130 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhs6p\" (UniqueName: \"kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p\") pod \"community-operators-rkxrd\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") " pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.447944 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkxrd" Feb 03 11:07:19 crc kubenswrapper[5010]: I0203 11:07:19.503555 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af" Feb 03 11:07:19 crc kubenswrapper[5010]: E0203 11:07:19.503762 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:07:20 crc kubenswrapper[5010]: I0203 11:07:20.037433 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerStarted","Data":"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4"} Feb 03 11:07:20 crc kubenswrapper[5010]: I0203 11:07:20.200440 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkxrd"] Feb 03 11:07:21 crc kubenswrapper[5010]: I0203 11:07:21.049916 5010 generic.go:334] "Generic (PLEG): container finished" podID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerID="6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4" exitCode=0 Feb 03 11:07:21 crc kubenswrapper[5010]: I0203 11:07:21.050006 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerDied","Data":"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4"} Feb 03 11:07:21 crc kubenswrapper[5010]: I0203 11:07:21.054503 5010 generic.go:334] "Generic (PLEG): container finished" podID="9e992b66-8ed7-4652-811b-360f53059f2c" containerID="dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a" exitCode=0 Feb 03 11:07:21 crc kubenswrapper[5010]: I0203 11:07:21.054579 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerDied","Data":"dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a"} Feb 03 11:07:21 crc kubenswrapper[5010]: I0203 11:07:21.054721 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerStarted","Data":"221b78dd250fa3bbf0778a979aae37d7c6453448fa3f462783d5b97fb2924c8e"} Feb 03 11:07:22 crc kubenswrapper[5010]: I0203 11:07:22.064501 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerStarted","Data":"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a"} Feb 03 11:07:22 crc kubenswrapper[5010]: I0203 11:07:22.067199 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerStarted","Data":"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c"} Feb 03 11:07:22 crc kubenswrapper[5010]: I0203 11:07:22.112952 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-6h57h" podStartSLOduration=2.648317458 podStartE2EDuration="5.112929272s" podCreationTimestamp="2026-02-03 11:07:17 +0000 UTC" firstStartedPulling="2026-02-03 11:07:18.994147915 +0000 UTC m=+3909.150124044" lastFinishedPulling="2026-02-03 11:07:21.458759729 +0000 UTC m=+3911.614735858" observedRunningTime="2026-02-03 11:07:22.110079352 +0000 UTC m=+3912.266055481" watchObservedRunningTime="2026-02-03 11:07:22.112929272 +0000 UTC m=+3912.268905411" Feb 03 11:07:23 crc kubenswrapper[5010]: I0203 11:07:23.078323 5010 generic.go:334] "Generic (PLEG): container finished" podID="9e992b66-8ed7-4652-811b-360f53059f2c" containerID="9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a" exitCode=0 Feb 03 11:07:23 crc kubenswrapper[5010]: I0203 11:07:23.078803 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerDied","Data":"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a"} Feb 03 11:07:25 crc kubenswrapper[5010]: I0203 11:07:25.105176 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerStarted","Data":"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e"} Feb 03 11:07:25 crc kubenswrapper[5010]: I0203 11:07:25.143056 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rkxrd" podStartSLOduration=3.246671906 podStartE2EDuration="6.143030417s" podCreationTimestamp="2026-02-03 11:07:19 +0000 UTC" firstStartedPulling="2026-02-03 11:07:21.055965167 +0000 UTC m=+3911.211941296" lastFinishedPulling="2026-02-03 11:07:23.952323678 +0000 UTC m=+3914.108299807" observedRunningTime="2026-02-03 11:07:25.13220344 +0000 UTC m=+3915.288179599" watchObservedRunningTime="2026-02-03 11:07:25.143030417 +0000 UTC m=+3915.299006556" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.026570 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.258204 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.271636 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.392930 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.574898 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/extract/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.591187 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.592735 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.832397 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-jvb56_74803e29-48a3-4667-bcdb-a94f381545b5/manager/0.log" Feb 03 11:07:26 crc kubenswrapper[5010]: I0203 11:07:26.841683 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-52g72_a7d72ea1-7126-4768-9cf8-f590ebd216d7/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.204674 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-j87lc_fd413d86-2cda-4079-a895-5cb60928a47f/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.322096 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-gnxws_9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.439074 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-7szqs_d33dc0fd-847b-41cc-a8ac-afde40120ba2/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.604607 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-k765q_9dc494bd-d6ef-4a22-8312-67750ebb3dbe/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.845370 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-w7ldz_2f204595-5d98-4c16-b5d1-5004c6cae836/manager/0.log" Feb 03 11:07:27 crc kubenswrapper[5010]: I0203 11:07:27.926848 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-vlmtm_5fafda3f-e0cd-4477-9c10-442af83a835b/manager/0.log" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.090081 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.090362 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.158015 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6h57h" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.182749 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-gb8tp_1a136ea1-ab68-4f60-8fb2-969363f25337/manager/0.log" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.185724 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-qrkwl_7f20ca5f-d244-45be-864d-3b8ad3d456ea/manager/0.log" Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 
11:07:28.357327 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-5zbbw_42f76062-3a9d-45c1-b928-d9ca236ec8ab/manager/0.log"
Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.514732 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-pwdks_4f112d60-8db7-4ec2-a82d-c7627ade05a3/manager/0.log"
Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.653199 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-t47jc_21f46dec-fb01-4293-ad08-706eb63a8738/manager/0.log"
Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.774562 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-5lzr6_27ab6ab7-e411-466c-bc4a-97d1660c547e/manager/0.log"
Feb 03 11:07:28 crc kubenswrapper[5010]: I0203 11:07:28.859653 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs_76bde002-75f6-4c4a-af3d-16aec5a221f4/manager/0.log"
Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.206799 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6h57h"
Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.213108 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-578f994c6c-72ld2_bde44bc9-c06a-4c2b-aad8-6f3247272024/operator/0.log"
Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.449436 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rkxrd"
Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.450158 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rkxrd"
Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.502428 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fv5km_1e93c0a0-5a7b-40d7-aaee-e31455baf139/registry-server/0.log"
Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.713274 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6h57h"]
Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.842235 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-g8qz8_3e47047f-9303-47e2-8312-c83315e1a3ff/manager/0.log"
Feb 03 11:07:29 crc kubenswrapper[5010]: I0203 11:07:29.875206 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-d99mj_8251c193-3c53-4651-87da-8b216cf907aa/manager/0.log"
Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.157132 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-kj7mj_2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a/operator/0.log"
Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.264813 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-844f879456-5ktjc_54aaeb1d-8a23-413f-b1f4-5115b167d78b/manager/0.log"
Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.369653 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-mrvfq_84af1f21-c29e-4846-9ce1-ea345cbad4fc/manager/0.log"
Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.471600 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-pgwx2_a62d6669-692b-4909-b192-4348ac82a50d/manager/0.log"
Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.516702 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rkxrd" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="registry-server" probeResult="failure" output=<
Feb 03 11:07:30 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s
Feb 03 11:07:30 crc kubenswrapper[5010]: >
Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.545503 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ck5g7_e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58/manager/0.log"
Feb 03 11:07:30 crc kubenswrapper[5010]: I0203 11:07:30.713254 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-ftqqr_37a4f3fa-bbaf-433d-9835-6ac576351651/manager/0.log"
Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.162855 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6h57h" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="registry-server" containerID="cri-o://598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c" gracePeriod=2
Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.649226 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6h57h"
Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.741102 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities\") pod \"e085b7a5-0035-41be-963b-d88937d4ddd3\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") "
Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.741287 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content\") pod \"e085b7a5-0035-41be-963b-d88937d4ddd3\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") "
Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.741326 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p79nm\" (UniqueName: \"kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm\") pod \"e085b7a5-0035-41be-963b-d88937d4ddd3\" (UID: \"e085b7a5-0035-41be-963b-d88937d4ddd3\") "
Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.741868 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities" (OuterVolumeSpecName: "utilities") pod "e085b7a5-0035-41be-963b-d88937d4ddd3" (UID: "e085b7a5-0035-41be-963b-d88937d4ddd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.748117 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm" (OuterVolumeSpecName: "kube-api-access-p79nm") pod "e085b7a5-0035-41be-963b-d88937d4ddd3" (UID: "e085b7a5-0035-41be-963b-d88937d4ddd3"). InnerVolumeSpecName "kube-api-access-p79nm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.801822 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e085b7a5-0035-41be-963b-d88937d4ddd3" (UID: "e085b7a5-0035-41be-963b-d88937d4ddd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.843559 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.843603 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p79nm\" (UniqueName: \"kubernetes.io/projected/e085b7a5-0035-41be-963b-d88937d4ddd3-kube-api-access-p79nm\") on node \"crc\" DevicePath \"\""
Feb 03 11:07:31 crc kubenswrapper[5010]: I0203 11:07:31.843621 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e085b7a5-0035-41be-963b-d88937d4ddd3-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.174130 5010 generic.go:334] "Generic (PLEG): container finished" podID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerID="598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c" exitCode=0
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.174183 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerDied","Data":"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c"}
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.174260 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6h57h" event={"ID":"e085b7a5-0035-41be-963b-d88937d4ddd3","Type":"ContainerDied","Data":"6fbc71d6cf4d21787d118de08f943a41757fb79167e2a84dc014c9c9697ac8eb"}
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.174284 5010 scope.go:117] "RemoveContainer" containerID="598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c"
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.174300 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6h57h"
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.213625 5010 scope.go:117] "RemoveContainer" containerID="6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4"
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.231472 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6h57h"]
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.242735 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6h57h"]
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.266903 5010 scope.go:117] "RemoveContainer" containerID="91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f"
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.311351 5010 scope.go:117] "RemoveContainer" containerID="598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c"
Feb 03 11:07:32 crc kubenswrapper[5010]: E0203 11:07:32.311878 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c\": container with ID starting with 598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c not found: ID does not exist" containerID="598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c"
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.311913 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c"} err="failed to get container status \"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c\": rpc error: code = NotFound desc = could not find container \"598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c\": container with ID starting with 598626d6ab9e059ff99f14b8884e6cad4de10d7a8004768cf926c77ce1268e2c not found: ID does not exist"
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.311936 5010 scope.go:117] "RemoveContainer" containerID="6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4"
Feb 03 11:07:32 crc kubenswrapper[5010]: E0203 11:07:32.312137 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4\": container with ID starting with 6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4 not found: ID does not exist" containerID="6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4"
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.312162 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4"} err="failed to get container status \"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4\": rpc error: code = NotFound desc = could not find container \"6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4\": container with ID starting with 6fd4c22f634db0fc88ae864cd01b6f4dd221fa0d24b2391d19db307f39023cc4 not found: ID does not exist"
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.312179 5010 scope.go:117] "RemoveContainer" containerID="91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f"
Feb 03 11:07:32 crc kubenswrapper[5010]: E0203 11:07:32.312453 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f\": container with ID starting with 91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f not found: ID does not exist" containerID="91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f"
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.312496 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f"} err="failed to get container status \"91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f\": rpc error: code = NotFound desc = could not find container \"91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f\": container with ID starting with 91e9aca0c272ab123c758c427d2541dfcc7bb20ef8009f636498eb3c6518b54f not found: ID does not exist"
Feb 03 11:07:32 crc kubenswrapper[5010]: E0203 11:07:32.380236 5010 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode085b7a5_0035_41be_963b_d88937d4ddd3.slice\": RecentStats: unable to find data in memory cache]"
Feb 03 11:07:32 crc kubenswrapper[5010]: I0203 11:07:32.518428 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" path="/var/lib/kubelet/pods/e085b7a5-0035-41be-963b-d88937d4ddd3/volumes"
Feb 03 11:07:34 crc kubenswrapper[5010]: I0203 11:07:34.502923 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:07:34 crc kubenswrapper[5010]: E0203 11:07:34.503454 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:07:39 crc kubenswrapper[5010]: I0203 11:07:39.512810 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rkxrd"
Feb 03 11:07:39 crc kubenswrapper[5010]: I0203 11:07:39.579401 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rkxrd"
Feb 03 11:07:39 crc kubenswrapper[5010]: I0203 11:07:39.755687 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkxrd"]
Feb 03 11:07:41 crc kubenswrapper[5010]: I0203 11:07:41.269205 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rkxrd" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="registry-server" containerID="cri-o://e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e" gracePeriod=2
Feb 03 11:07:41 crc kubenswrapper[5010]: I0203 11:07:41.849721 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkxrd"
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.035478 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhs6p\" (UniqueName: \"kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p\") pod \"9e992b66-8ed7-4652-811b-360f53059f2c\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") "
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.035741 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities\") pod \"9e992b66-8ed7-4652-811b-360f53059f2c\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") "
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.035820 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content\") pod \"9e992b66-8ed7-4652-811b-360f53059f2c\" (UID: \"9e992b66-8ed7-4652-811b-360f53059f2c\") "
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.043904 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p" (OuterVolumeSpecName: "kube-api-access-mhs6p") pod "9e992b66-8ed7-4652-811b-360f53059f2c" (UID: "9e992b66-8ed7-4652-811b-360f53059f2c"). InnerVolumeSpecName "kube-api-access-mhs6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.044504 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities" (OuterVolumeSpecName: "utilities") pod "9e992b66-8ed7-4652-811b-360f53059f2c" (UID: "9e992b66-8ed7-4652-811b-360f53059f2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.093145 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e992b66-8ed7-4652-811b-360f53059f2c" (UID: "9e992b66-8ed7-4652-811b-360f53059f2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.139201 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.139296 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e992b66-8ed7-4652-811b-360f53059f2c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.139313 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhs6p\" (UniqueName: \"kubernetes.io/projected/9e992b66-8ed7-4652-811b-360f53059f2c-kube-api-access-mhs6p\") on node \"crc\" DevicePath \"\""
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.283506 5010 generic.go:334] "Generic (PLEG): container finished" podID="9e992b66-8ed7-4652-811b-360f53059f2c" containerID="e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e" exitCode=0
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.283567 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerDied","Data":"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e"}
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.283614 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkxrd" event={"ID":"9e992b66-8ed7-4652-811b-360f53059f2c","Type":"ContainerDied","Data":"221b78dd250fa3bbf0778a979aae37d7c6453448fa3f462783d5b97fb2924c8e"}
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.283635 5010 scope.go:117] "RemoveContainer" containerID="e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e"
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.283634 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkxrd"
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.308334 5010 scope.go:117] "RemoveContainer" containerID="9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a"
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.333950 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkxrd"]
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.343747 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rkxrd"]
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.347513 5010 scope.go:117] "RemoveContainer" containerID="dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a"
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.397019 5010 scope.go:117] "RemoveContainer" containerID="e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e"
Feb 03 11:07:42 crc kubenswrapper[5010]: E0203 11:07:42.397659 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e\": container with ID starting with e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e not found: ID does not exist" containerID="e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e"
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.397699 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e"} err="failed to get container status \"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e\": rpc error: code = NotFound desc = could not find container \"e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e\": container with ID starting with e6790d62953074ea20d0f9ab3c01cbfee7d2065c871e3f0793aa1e54014e0d1e not found: ID does not exist"
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.397732 5010 scope.go:117] "RemoveContainer" containerID="9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a"
Feb 03 11:07:42 crc kubenswrapper[5010]: E0203 11:07:42.397956 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a\": container with ID starting with 9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a not found: ID does not exist" containerID="9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a"
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.397986 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a"} err="failed to get container status \"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a\": rpc error: code = NotFound desc = could not find container \"9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a\": container with ID starting with 9871df3993621e2c07135c28cf748b6b7a1052c31af8b8652b4110c17727706a not found: ID does not exist"
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.398005 5010 scope.go:117] "RemoveContainer" containerID="dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a"
Feb 03 11:07:42 crc kubenswrapper[5010]: E0203 11:07:42.398266 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a\": container with ID starting with dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a not found: ID does not exist" containerID="dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a"
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.398296 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a"} err="failed to get container status \"dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a\": rpc error: code = NotFound desc = could not find container \"dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a\": container with ID starting with dfe79353cfa463c7902bc1d3fb2701622e0bb0dc6815e900fffca02fe49e111a not found: ID does not exist"
Feb 03 11:07:42 crc kubenswrapper[5010]: I0203 11:07:42.521763 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" path="/var/lib/kubelet/pods/9e992b66-8ed7-4652-811b-360f53059f2c/volumes"
Feb 03 11:07:45 crc kubenswrapper[5010]: I0203 11:07:45.502637 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:07:45 crc kubenswrapper[5010]: E0203 11:07:45.503989 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:07:55 crc kubenswrapper[5010]: I0203 11:07:55.455640 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xcpwg_ba766e4c-056f-4be6-a4b9-05592b641f87/control-plane-machine-set-operator/0.log"
Feb 03 11:07:55 crc kubenswrapper[5010]: I0203 11:07:55.681910 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5mq4r_dc73dc6e-53ff-48b8-932e-d5aeb839f2dd/kube-rbac-proxy/0.log"
Feb 03 11:07:55 crc kubenswrapper[5010]: I0203 11:07:55.756234 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5mq4r_dc73dc6e-53ff-48b8-932e-d5aeb839f2dd/machine-api-operator/0.log"
Feb 03 11:07:57 crc kubenswrapper[5010]: I0203 11:07:57.502897 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:07:58 crc kubenswrapper[5010]: I0203 11:07:58.467578 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce"}
Feb 03 11:08:13 crc kubenswrapper[5010]: I0203 11:08:13.240509 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-wtwpn_7746ae6f-d9a0-4bba-a7bc-4920ed478ff4/cert-manager-controller/0.log"
Feb 03 11:08:13 crc kubenswrapper[5010]: I0203 11:08:13.826171 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-b5ngd_b9d02d93-3df5-4e4a-99b3-07329087dc2c/cert-manager-cainjector/0.log"
Feb 03 11:08:13 crc kubenswrapper[5010]: I0203 11:08:13.835813 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-bfc2c_26bf0193-c1b8-4018-a7e4-4429a4292dfb/cert-manager-webhook/0.log"
Feb 03 11:08:32 crc kubenswrapper[5010]: I0203 11:08:32.863915 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-npjjg_a09e0456-1529-4ece-9266-d02a283d6bd1/nmstate-console-plugin/0.log"
Feb 03 11:08:33 crc kubenswrapper[5010]: I0203 11:08:33.388890 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-55jg2_d47b696a-a1d0-4389-a099-7f375ab72f8c/nmstate-handler/0.log"
Feb 03 11:08:33 crc kubenswrapper[5010]: I0203 11:08:33.414519 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hl7ls_552fa369-352c-4690-aa39-f0364021feae/nmstate-metrics/0.log"
Feb 03 11:08:33 crc kubenswrapper[5010]: I0203 11:08:33.419364 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hl7ls_552fa369-352c-4690-aa39-f0364021feae/kube-rbac-proxy/0.log"
Feb 03 11:08:33 crc kubenswrapper[5010]: I0203 11:08:33.645150 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-2xtg6_1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a/nmstate-webhook/0.log"
Feb 03 11:08:33 crc kubenswrapper[5010]: I0203 11:08:33.684647 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-frs8s_e5c85e5b-ab19-414d-97e6-767b9e01f731/nmstate-operator/0.log"
Feb 03 11:09:06 crc kubenswrapper[5010]: I0203 11:09:06.750225 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lpqgh_19f856e9-2325-41eb-8ed3-4daff562e84a/kube-rbac-proxy/0.log"
Feb 03 11:09:06 crc kubenswrapper[5010]: I0203 11:09:06.934009 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lpqgh_19f856e9-2325-41eb-8ed3-4daff562e84a/controller/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.069853 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.273557 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.283316 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.306240 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.321283 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.470350 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.518985 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.538347 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.559482 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.783385 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.785668 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.798708 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:09:07 crc kubenswrapper[5010]: I0203 11:09:07.814880 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/controller/0.log"
Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.007083 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/kube-rbac-proxy/0.log"
Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.012481 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/kube-rbac-proxy-frr/0.log"
Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.015962 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/frr-metrics/0.log"
Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.244707 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dbqxw_f6ea4a71-2a4d-48cd-9dda-ba453a1c8766/frr-k8s-webhook-server/0.log"
Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.296520 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/reloader/0.log"
Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.463990 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76d7f7cd57-dncnc_5ec28393-ea76-4413-a903-612126368291/manager/0.log"
Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.666123 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b857c8d44-88x9l_d90f33c9-1c81-4b74-a905-71aed9ecf222/webhook-server/0.log"
Feb 03 11:09:08 crc kubenswrapper[5010]: I0203 11:09:08.778634 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-mlsql_72e88a76-8c59-4d07-813e-d7d505d14c3b/kube-rbac-proxy/0.log"
Feb 03 11:09:09 crc kubenswrapper[5010]: I0203 11:09:09.345915 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-mlsql_72e88a76-8c59-4d07-813e-d7d505d14c3b/speaker/0.log"
Feb 03 11:09:09 crc kubenswrapper[5010]: I0203 11:09:09.498556 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/frr/0.log"
Feb 03 11:09:27 crc kubenswrapper[5010]: I0203 11:09:27.084491 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:09:27 crc kubenswrapper[5010]: I0203 11:09:27.309499 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:09:27 crc kubenswrapper[5010]: I0203 11:09:27.342363 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:09:27 crc kubenswrapper[5010]: I0203 11:09:27.358735 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.181463 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.211361 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/extract/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.211423 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.402060 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.564955 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.611452 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.629099 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.819694 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/extract/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.835310 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:09:28 crc kubenswrapper[5010]: I0203 11:09:28.844064 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.033849 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.342457 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.347416 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.365084 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.528752 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.568944 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:09:29 crc kubenswrapper[5010]: I0203 11:09:29.856577 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.026112 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.034391 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.200858 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.255041 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/registry-server/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.384921 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.464770 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.591865 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lskbc_a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe/marketplace-operator/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.930151 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/registry-server/0.log"
Feb 03 11:09:30 crc kubenswrapper[5010]: I0203 11:09:30.999565 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.202400 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.264143 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.271043 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.426532 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.469907 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.539047 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.594028 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/registry-server/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.786606 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.813604 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.813715 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:09:31 crc kubenswrapper[5010]: I0203 11:09:31.999145 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:09:32 crc kubenswrapper[5010]: I0203 11:09:32.088325 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:09:32 crc kubenswrapper[5010]: I0203 11:09:32.586150 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/registry-server/0.log"
Feb 03 11:09:54 crc kubenswrapper[5010]: E0203 11:09:54.343979 5010 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.58:40796->38.102.83.58:33647: write tcp 38.102.83.58:40796->38.102.83.58:33647: write: broken pipe
Feb 03 11:10:16 crc kubenswrapper[5010]: I0203 11:10:16.390358 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 11:10:16 crc kubenswrapper[5010]: I0203 11:10:16.391397 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 11:10:46 crc kubenswrapper[5010]: I0203 11:10:46.390595 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 11:10:46 crc kubenswrapper[5010]: I0203 11:10:46.391672 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 11:11:16 crc kubenswrapper[5010]: I0203 11:11:16.392402 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 11:11:16 crc kubenswrapper[5010]: I0203 11:11:16.393325 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 11:11:16 crc kubenswrapper[5010]: I0203 11:11:16.393397 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz"
Feb 03 11:11:16 crc kubenswrapper[5010]: I0203 11:11:16.394444 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 11:11:16 crc kubenswrapper[5010]: I0203 11:11:16.394519 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce" gracePeriod=600
Feb 03 11:11:17 crc kubenswrapper[5010]: I0203 11:11:17.225472 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce" exitCode=0
Feb 03 11:11:17 crc kubenswrapper[5010]: I0203 11:11:17.225557 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce"}
Feb 03 11:11:17 crc kubenswrapper[5010]: I0203 11:11:17.225952 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"}
Feb 03 11:11:17 crc kubenswrapper[5010]: I0203 11:11:17.226036 5010 scope.go:117] "RemoveContainer" containerID="54aa23d9db8a8dbbf4b6fa999de5b88f9b073b5abdc5632e1606837c20d612af"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.029534 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"]
Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031544 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="extract-content"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031566 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="extract-content"
Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031585 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="registry-server"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031595 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="registry-server"
Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031621 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="extract-utilities"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031630 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="extract-utilities"
Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031649 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="extract-content"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031657 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="extract-content"
Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031691 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="extract-utilities"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031699 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="extract-utilities"
Feb 03 11:11:25 crc kubenswrapper[5010]: E0203 11:11:25.031717 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="registry-server"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.031725 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="registry-server"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.032064 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e992b66-8ed7-4652-811b-360f53059f2c" containerName="registry-server"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.032084 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="e085b7a5-0035-41be-963b-d88937d4ddd3" containerName="registry-server"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.034269 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.048091 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"]
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.191798 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.191941 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.192006 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsjjp\" (UniqueName: \"kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.294716 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.294811 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsjjp\" (UniqueName: \"kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.294952 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.295811 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.295840 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.320866 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsjjp\" (UniqueName: \"kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp\") pod \"redhat-operators-5dghg\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") " pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.361547 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:25 crc kubenswrapper[5010]: I0203 11:11:25.899983 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"]
Feb 03 11:11:26 crc kubenswrapper[5010]: I0203 11:11:26.323160 5010 generic.go:334] "Generic (PLEG): container finished" podID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerID="2edd458b2cfaa2b6e29690d9b6dedd98ec6688b7df796df1d92ea15b8aa6954c" exitCode=0
Feb 03 11:11:26 crc kubenswrapper[5010]: I0203 11:11:26.323404 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerDied","Data":"2edd458b2cfaa2b6e29690d9b6dedd98ec6688b7df796df1d92ea15b8aa6954c"}
Feb 03 11:11:26 crc kubenswrapper[5010]: I0203 11:11:26.323481 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerStarted","Data":"c325c7a4482ba02c5a1e03254cfa81223b26a9652eb1ea3e709a042cf8c205a0"}
Feb 03 11:11:26 crc kubenswrapper[5010]: I0203 11:11:26.325924 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 11:11:29 crc kubenswrapper[5010]: I0203 11:11:29.396614 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerStarted","Data":"306bee7e759854f6a192fe0ffdf5df25e12e0a3028ac1c2be5e4c36d51b30a5f"}
Feb 03 11:11:30 crc kubenswrapper[5010]: I0203 11:11:30.407581 5010 generic.go:334] "Generic (PLEG): container finished" podID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerID="306bee7e759854f6a192fe0ffdf5df25e12e0a3028ac1c2be5e4c36d51b30a5f" exitCode=0
Feb 03 11:11:30 crc kubenswrapper[5010]: I0203 11:11:30.407844 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerDied","Data":"306bee7e759854f6a192fe0ffdf5df25e12e0a3028ac1c2be5e4c36d51b30a5f"}
Feb 03 11:11:31 crc kubenswrapper[5010]: I0203 11:11:31.424206 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerStarted","Data":"3bd849a4e703cdb76aecc93972aa5f7990799fc9bee08fac17023aef5ff87483"}
Feb 03 11:11:31 crc kubenswrapper[5010]: I0203 11:11:31.462825 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5dghg" podStartSLOduration=1.9570759149999999 podStartE2EDuration="6.462792619s" podCreationTimestamp="2026-02-03 11:11:25 +0000 UTC" firstStartedPulling="2026-02-03 11:11:26.325631954 +0000 UTC m=+4156.481608083" lastFinishedPulling="2026-02-03 11:11:30.831348658 +0000 UTC m=+4160.987324787" observedRunningTime="2026-02-03 11:11:31.460033101 +0000 UTC m=+4161.616009230" watchObservedRunningTime="2026-02-03 11:11:31.462792619 +0000 UTC m=+4161.618768748"
Feb 03 11:11:35 crc kubenswrapper[5010]: I0203 11:11:35.361899 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:35 crc kubenswrapper[5010]: I0203 11:11:35.362603 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:36 crc kubenswrapper[5010]: I0203 11:11:36.438226 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5dghg" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="registry-server" probeResult="failure" output=<
Feb 03 11:11:36 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s
Feb 03 11:11:36 crc kubenswrapper[5010]: >
Feb 03 11:11:45 crc kubenswrapper[5010]: I0203 11:11:45.423057 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:45 crc kubenswrapper[5010]: I0203 11:11:45.499076 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.238274 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"]
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.239019 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5dghg" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="registry-server" containerID="cri-o://3bd849a4e703cdb76aecc93972aa5f7990799fc9bee08fac17023aef5ff87483" gracePeriod=2
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.626617 5010 generic.go:334] "Generic (PLEG): container finished" podID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerID="3bd849a4e703cdb76aecc93972aa5f7990799fc9bee08fac17023aef5ff87483" exitCode=0
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.626707 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerDied","Data":"3bd849a4e703cdb76aecc93972aa5f7990799fc9bee08fac17023aef5ff87483"}
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.627097 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dghg" event={"ID":"14281e11-e2f8-462e-91e3-ad1c46fa575f","Type":"ContainerDied","Data":"c325c7a4482ba02c5a1e03254cfa81223b26a9652eb1ea3e709a042cf8c205a0"}
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.627120 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c325c7a4482ba02c5a1e03254cfa81223b26a9652eb1ea3e709a042cf8c205a0"
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.731421 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.907716 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsjjp\" (UniqueName: \"kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp\") pod \"14281e11-e2f8-462e-91e3-ad1c46fa575f\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") "
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.907819 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content\") pod \"14281e11-e2f8-462e-91e3-ad1c46fa575f\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") "
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.908030 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities\") pod \"14281e11-e2f8-462e-91e3-ad1c46fa575f\" (UID: \"14281e11-e2f8-462e-91e3-ad1c46fa575f\") "
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.909638 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities" (OuterVolumeSpecName: "utilities") pod "14281e11-e2f8-462e-91e3-ad1c46fa575f" (UID: "14281e11-e2f8-462e-91e3-ad1c46fa575f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.910460 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 11:11:47 crc kubenswrapper[5010]: I0203 11:11:47.924571 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp" (OuterVolumeSpecName: "kube-api-access-tsjjp") pod "14281e11-e2f8-462e-91e3-ad1c46fa575f" (UID: "14281e11-e2f8-462e-91e3-ad1c46fa575f"). InnerVolumeSpecName "kube-api-access-tsjjp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.016434 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsjjp\" (UniqueName: \"kubernetes.io/projected/14281e11-e2f8-462e-91e3-ad1c46fa575f-kube-api-access-tsjjp\") on node \"crc\" DevicePath \"\""
Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.045507 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14281e11-e2f8-462e-91e3-ad1c46fa575f" (UID: "14281e11-e2f8-462e-91e3-ad1c46fa575f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.119892 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14281e11-e2f8-462e-91e3-ad1c46fa575f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.640994 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dghg"
Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.680316 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"]
Feb 03 11:11:48 crc kubenswrapper[5010]: I0203 11:11:48.693556 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5dghg"]
Feb 03 11:11:50 crc kubenswrapper[5010]: I0203 11:11:50.524178 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" path="/var/lib/kubelet/pods/14281e11-e2f8-462e-91e3-ad1c46fa575f/volumes"
Feb 03 11:11:51 crc kubenswrapper[5010]: I0203 11:11:51.676909 5010 generic.go:334] "Generic (PLEG): container finished" podID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerID="f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96" exitCode=0
Feb 03 11:11:51 crc kubenswrapper[5010]: I0203 11:11:51.677057 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" event={"ID":"a60388dd-8e4d-463c-a5da-b210ae7c19fd","Type":"ContainerDied","Data":"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96"}
Feb 03 11:11:51 crc kubenswrapper[5010]: I0203 11:11:51.678619 5010 scope.go:117] "RemoveContainer" containerID="f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96"
Feb 03 11:11:52 crc kubenswrapper[5010]: I0203 11:11:52.671035 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hfbsh_must-gather-hdcmp_a60388dd-8e4d-463c-a5da-b210ae7c19fd/gather/0.log"
Feb 03 11:12:00 crc kubenswrapper[5010]: I0203 11:12:00.943948 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hfbsh/must-gather-hdcmp"]
Feb 03 11:12:00 crc kubenswrapper[5010]: I0203 11:12:00.945716 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="copy" containerID="cri-o://d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774" gracePeriod=2
Feb 03 11:12:00 crc kubenswrapper[5010]: I0203 11:12:00.956369 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hfbsh/must-gather-hdcmp"]
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.414244 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hfbsh_must-gather-hdcmp_a60388dd-8e4d-463c-a5da-b210ae7c19fd/copy/0.log"
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.415097 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hfbsh/must-gather-hdcmp"
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.494923 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output\") pod \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") "
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.495177 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4jzz\" (UniqueName: \"kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz\") pod \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\" (UID: \"a60388dd-8e4d-463c-a5da-b210ae7c19fd\") "
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.503158 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz" (OuterVolumeSpecName: "kube-api-access-t4jzz") pod "a60388dd-8e4d-463c-a5da-b210ae7c19fd" (UID: "a60388dd-8e4d-463c-a5da-b210ae7c19fd"). InnerVolumeSpecName "kube-api-access-t4jzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.598654 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4jzz\" (UniqueName: \"kubernetes.io/projected/a60388dd-8e4d-463c-a5da-b210ae7c19fd-kube-api-access-t4jzz\") on node \"crc\" DevicePath \"\""
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.664273 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a60388dd-8e4d-463c-a5da-b210ae7c19fd" (UID: "a60388dd-8e4d-463c-a5da-b210ae7c19fd"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.700577 5010 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a60388dd-8e4d-463c-a5da-b210ae7c19fd-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.810993 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hfbsh_must-gather-hdcmp_a60388dd-8e4d-463c-a5da-b210ae7c19fd/copy/0.log"
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.811561 5010 generic.go:334] "Generic (PLEG): container finished" podID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerID="d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774" exitCode=143
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.811653 5010 scope.go:117] "RemoveContainer" containerID="d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774"
Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.811896 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hfbsh/must-gather-hdcmp" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.862763 5010 scope.go:117] "RemoveContainer" containerID="f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.921660 5010 scope.go:117] "RemoveContainer" containerID="d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774" Feb 03 11:12:01 crc kubenswrapper[5010]: E0203 11:12:01.931540 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774\": container with ID starting with d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774 not found: ID does not exist" containerID="d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.931608 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774"} err="failed to get container status \"d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774\": rpc error: code = NotFound desc = could not find container \"d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774\": container with ID starting with d0ca9d650c03f28692690ebdf474ad1d46e17199923f41abd227022ab4dd0774 not found: ID does not exist" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.931645 5010 scope.go:117] "RemoveContainer" containerID="f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96" Feb 03 11:12:01 crc kubenswrapper[5010]: E0203 11:12:01.932260 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96\": container with ID starting with f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96 not found: ID does not exist" containerID="f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96" Feb 03 11:12:01 crc kubenswrapper[5010]: I0203 11:12:01.932315 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96"} err="failed to get container status \"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96\": rpc error: code = NotFound desc = could not find container \"f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96\": container with ID starting with f2f13ebeaf1eb9024b07620c88c4d5bcaf35f2cd81b46c09d7d87f5a91138b96 not found: ID does not exist" Feb 03 11:12:02 crc kubenswrapper[5010]: I0203 11:12:02.514826 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" path="/var/lib/kubelet/pods/a60388dd-8e4d-463c-a5da-b210ae7c19fd/volumes" Feb 03 11:13:16 crc kubenswrapper[5010]: I0203 11:13:16.390615 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:13:16 crc kubenswrapper[5010]: I0203 11:13:16.391198 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:13:46 crc kubenswrapper[5010]: I0203 11:13:46.390240 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:13:46 crc kubenswrapper[5010]: I0203 11:13:46.391144 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.390865 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.391714 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.391791 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.394875 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.394996 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" gracePeriod=600 Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.557202 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" exitCode=0 Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.557277 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"} Feb 03 11:14:16 crc kubenswrapper[5010]: I0203 11:14:16.557316 5010 scope.go:117] "RemoveContainer" containerID="ac78d23a14c3e413f9adbd91456af15e59e69a5cb21ee1b464426dbfabf685ce" Feb 03 11:14:16 crc kubenswrapper[5010]: E0203 
11:14:16.683247 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:14:17 crc kubenswrapper[5010]: I0203 11:14:17.601521 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:14:17 crc kubenswrapper[5010]: E0203 11:14:17.601837 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:14:30 crc kubenswrapper[5010]: I0203 11:14:30.530277 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:14:30 crc kubenswrapper[5010]: E0203 11:14:30.531992 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:14:45 crc kubenswrapper[5010]: I0203 11:14:45.502115 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:14:45 crc kubenswrapper[5010]: E0203 11:14:45.503294 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.100910 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mcw6z/must-gather-xf96m"] Feb 03 11:14:51 crc kubenswrapper[5010]: E0203 11:14:51.107119 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="extract-utilities" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.107229 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="extract-utilities" Feb 03 11:14:51 crc kubenswrapper[5010]: E0203 11:14:51.107321 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="copy" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.107378 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="copy" Feb 03 11:14:51 crc kubenswrapper[5010]: E0203 11:14:51.107432 5010 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="registry-server" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.107482 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="registry-server" Feb 03 11:14:51 crc kubenswrapper[5010]: E0203 11:14:51.107572 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="gather" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.107627 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="gather" Feb 03 11:14:51 crc kubenswrapper[5010]: E0203 11:14:51.107687 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="extract-content" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.107742 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="extract-content" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.108004 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="copy" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.108074 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60388dd-8e4d-463c-a5da-b210ae7c19fd" containerName="gather" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.108147 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="14281e11-e2f8-462e-91e3-ad1c46fa575f" containerName="registry-server" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.109403 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.114895 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mcw6z"/"openshift-service-ca.crt" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.115147 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mcw6z"/"kube-root-ca.crt" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.121778 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mcw6z"/"default-dockercfg-qc58k" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.143892 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mcw6z/must-gather-xf96m"] Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.197350 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lc2c\" (UniqueName: \"kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.197460 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.299982 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lc2c\" 
(UniqueName: \"kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.300067 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.300724 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:51 crc kubenswrapper[5010]: I0203 11:14:51.891556 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lc2c\" (UniqueName: \"kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c\") pod \"must-gather-xf96m\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:52 crc kubenswrapper[5010]: I0203 11:14:52.032081 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:14:52 crc kubenswrapper[5010]: I0203 11:14:52.523633 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mcw6z/must-gather-xf96m"] Feb 03 11:14:53 crc kubenswrapper[5010]: I0203 11:14:53.097446 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/must-gather-xf96m" event={"ID":"9734985d-a674-4c92-b03c-7ca708780de2","Type":"ContainerStarted","Data":"1bb6ed59c0b4992b1aaa8c727fe9862558803252bbff9dc2431ce922cbca729c"} Feb 03 11:14:53 crc kubenswrapper[5010]: I0203 11:14:53.097962 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/must-gather-xf96m" event={"ID":"9734985d-a674-4c92-b03c-7ca708780de2","Type":"ContainerStarted","Data":"05ab0abbc9679831aee8cf150363b170113cefe84ec90a83a731ed49cebad061"} Feb 03 11:14:54 crc kubenswrapper[5010]: I0203 11:14:54.111784 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/must-gather-xf96m" event={"ID":"9734985d-a674-4c92-b03c-7ca708780de2","Type":"ContainerStarted","Data":"10474f5f43472032315addbe669cd60be39554b99965e76916b96cb1a8a1f7cb"} Feb 03 11:14:54 crc kubenswrapper[5010]: I0203 11:14:54.133805 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mcw6z/must-gather-xf96m" podStartSLOduration=3.133776087 podStartE2EDuration="3.133776087s" podCreationTimestamp="2026-02-03 11:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 11:14:54.130631709 +0000 UTC m=+4364.286607838" watchObservedRunningTime="2026-02-03 11:14:54.133776087 +0000 UTC m=+4364.289752216" Feb 03 11:14:56 crc kubenswrapper[5010]: I0203 11:14:56.882741 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-svtxv"] Feb 03 11:14:56 crc kubenswrapper[5010]: I0203 11:14:56.884942 5010 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:56 crc kubenswrapper[5010]: I0203 11:14:56.954084 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:56 crc kubenswrapper[5010]: I0203 11:14:56.954302 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8js\" (UniqueName: \"kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: I0203 11:14:57.056804 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8js\" (UniqueName: \"kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: I0203 11:14:57.056882 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: I0203 11:14:57.057042 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: I0203 11:14:57.078467 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8js\" (UniqueName: \"kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js\") pod \"crc-debug-svtxv\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: I0203 11:14:57.208166 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:14:57 crc kubenswrapper[5010]: W0203 11:14:57.249195 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44a2f827_854b_449a_84ef_1056dd3f6551.slice/crio-dfda6f0c403cfd8f1cc1440fcba1f368a2177ea3efe616ff02d08908a9af0a0e WatchSource:0}: Error finding container dfda6f0c403cfd8f1cc1440fcba1f368a2177ea3efe616ff02d08908a9af0a0e: Status 404 returned error can't find the container with id dfda6f0c403cfd8f1cc1440fcba1f368a2177ea3efe616ff02d08908a9af0a0e Feb 03 11:14:58 crc kubenswrapper[5010]: I0203 11:14:58.173736 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" event={"ID":"44a2f827-854b-449a-84ef-1056dd3f6551","Type":"ContainerStarted","Data":"da5a6743ef56c67276b9a41831c4be7bccdaf47755f96146ee789a456925019b"} Feb 03 11:14:58 crc kubenswrapper[5010]: I0203 11:14:58.174434 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" event={"ID":"44a2f827-854b-449a-84ef-1056dd3f6551","Type":"ContainerStarted","Data":"dfda6f0c403cfd8f1cc1440fcba1f368a2177ea3efe616ff02d08908a9af0a0e"} Feb 03 11:14:58 crc kubenswrapper[5010]: I0203 11:14:58.215669 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" podStartSLOduration=2.215646629 podStartE2EDuration="2.215646629s" podCreationTimestamp="2026-02-03 11:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 11:14:58.209152608 +0000 UTC m=+4368.365128737" watchObservedRunningTime="2026-02-03 11:14:58.215646629 +0000 UTC m=+4368.371622758" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.192149 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7"] Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.194615 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.197972 5010 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.201284 5010 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.206448 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7"] Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.327248 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.327421 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt6cr\" (UniqueName: \"kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.327549 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.429716 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.429838 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt6cr\" (UniqueName: \"kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.429909 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.432639 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume\") pod 
\"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.438205 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.448960 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt6cr\" (UniqueName: \"kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr\") pod \"collect-profiles-29501955-mz8d7\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.509635 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:15:00 crc kubenswrapper[5010]: E0203 11:15:00.510375 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:15:00 crc kubenswrapper[5010]: I0203 11:15:00.524085 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:01 crc kubenswrapper[5010]: I0203 11:15:01.090691 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7"] Feb 03 11:15:01 crc kubenswrapper[5010]: I0203 11:15:01.212321 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" event={"ID":"70655f23-d08e-4b01-85a3-abe91c302928","Type":"ContainerStarted","Data":"2433ad62acce4157c35bbc328622aef6febcc5b182871e799aabbdc9fd47fa60"} Feb 03 11:15:02 crc kubenswrapper[5010]: I0203 11:15:02.226368 5010 generic.go:334] "Generic (PLEG): container finished" podID="70655f23-d08e-4b01-85a3-abe91c302928" containerID="683870c4ba048ecd94c07fb0d8aef48237fc85bc962ccb5cab622d562e3c45dd" exitCode=0 Feb 03 11:15:02 crc kubenswrapper[5010]: I0203 11:15:02.226488 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" event={"ID":"70655f23-d08e-4b01-85a3-abe91c302928","Type":"ContainerDied","Data":"683870c4ba048ecd94c07fb0d8aef48237fc85bc962ccb5cab622d562e3c45dd"} Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.648167 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.804646 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume\") pod \"70655f23-d08e-4b01-85a3-abe91c302928\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.804758 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt6cr\" (UniqueName: \"kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr\") pod \"70655f23-d08e-4b01-85a3-abe91c302928\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.804803 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume\") pod \"70655f23-d08e-4b01-85a3-abe91c302928\" (UID: \"70655f23-d08e-4b01-85a3-abe91c302928\") " Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.806649 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume" (OuterVolumeSpecName: "config-volume") pod "70655f23-d08e-4b01-85a3-abe91c302928" (UID: "70655f23-d08e-4b01-85a3-abe91c302928"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 11:15:03 crc kubenswrapper[5010]: I0203 11:15:03.806959 5010 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70655f23-d08e-4b01-85a3-abe91c302928-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.252829 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" event={"ID":"70655f23-d08e-4b01-85a3-abe91c302928","Type":"ContainerDied","Data":"2433ad62acce4157c35bbc328622aef6febcc5b182871e799aabbdc9fd47fa60"} Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.253309 5010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2433ad62acce4157c35bbc328622aef6febcc5b182871e799aabbdc9fd47fa60" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.252893 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501955-mz8d7" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.385608 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70655f23-d08e-4b01-85a3-abe91c302928" (UID: "70655f23-d08e-4b01-85a3-abe91c302928"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.389558 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr" (OuterVolumeSpecName: "kube-api-access-tt6cr") pod "70655f23-d08e-4b01-85a3-abe91c302928" (UID: "70655f23-d08e-4b01-85a3-abe91c302928"). InnerVolumeSpecName "kube-api-access-tt6cr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.436861 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt6cr\" (UniqueName: \"kubernetes.io/projected/70655f23-d08e-4b01-85a3-abe91c302928-kube-api-access-tt6cr\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.436927 5010 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70655f23-d08e-4b01-85a3-abe91c302928-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.776195 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb"] Feb 03 11:15:04 crc kubenswrapper[5010]: I0203 11:15:04.784581 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501910-7ksgb"] Feb 03 11:15:06 crc kubenswrapper[5010]: I0203 11:15:06.515153 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e554f0-be79-4c9c-974d-f25941ae930e" path="/var/lib/kubelet/pods/34e554f0-be79-4c9c-974d-f25941ae930e/volumes" Feb 03 11:15:13 crc kubenswrapper[5010]: I0203 11:15:13.503636 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:15:13 crc kubenswrapper[5010]: E0203 11:15:13.504833 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:15:23 crc kubenswrapper[5010]: I0203 11:15:23.198396 5010 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7594db59b7-8cg94" podUID="a0d01af0-abb7-4cd1-92d7-d741182948f9" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 03 11:15:24 crc kubenswrapper[5010]: I0203 11:15:24.308450 5010 scope.go:117] "RemoveContainer" containerID="50c1d73139063edd3d9e95aeb676f19fdb661e56cb93f7dad0c5a0ed756233ca" Feb 03 11:15:27 crc kubenswrapper[5010]: I0203 11:15:27.502391 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:15:27 crc kubenswrapper[5010]: E0203 11:15:27.503674 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:15:36 crc kubenswrapper[5010]: I0203 11:15:36.664002 5010 generic.go:334] "Generic (PLEG): container finished" podID="44a2f827-854b-449a-84ef-1056dd3f6551" containerID="da5a6743ef56c67276b9a41831c4be7bccdaf47755f96146ee789a456925019b" exitCode=0 Feb 03 11:15:36 crc kubenswrapper[5010]: I0203 11:15:36.664100 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" 
event={"ID":"44a2f827-854b-449a-84ef-1056dd3f6551","Type":"ContainerDied","Data":"da5a6743ef56c67276b9a41831c4be7bccdaf47755f96146ee789a456925019b"} Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.810084 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.867564 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-svtxv"] Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.877996 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-svtxv"] Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.955902 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host\") pod \"44a2f827-854b-449a-84ef-1056dd3f6551\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.956041 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host" (OuterVolumeSpecName: "host") pod "44a2f827-854b-449a-84ef-1056dd3f6551" (UID: "44a2f827-854b-449a-84ef-1056dd3f6551"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.956128 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt8js\" (UniqueName: \"kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js\") pod \"44a2f827-854b-449a-84ef-1056dd3f6551\" (UID: \"44a2f827-854b-449a-84ef-1056dd3f6551\") " Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.956588 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/44a2f827-854b-449a-84ef-1056dd3f6551-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:37 crc kubenswrapper[5010]: I0203 11:15:37.964288 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js" (OuterVolumeSpecName: "kube-api-access-rt8js") pod "44a2f827-854b-449a-84ef-1056dd3f6551" (UID: "44a2f827-854b-449a-84ef-1056dd3f6551"). InnerVolumeSpecName "kube-api-access-rt8js". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:15:38 crc kubenswrapper[5010]: I0203 11:15:38.059452 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt8js\" (UniqueName: \"kubernetes.io/projected/44a2f827-854b-449a-84ef-1056dd3f6551-kube-api-access-rt8js\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:38 crc kubenswrapper[5010]: I0203 11:15:38.503354 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:15:38 crc kubenswrapper[5010]: E0203 11:15:38.503699 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:15:38 crc kubenswrapper[5010]: I0203 11:15:38.514835 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a2f827-854b-449a-84ef-1056dd3f6551" path="/var/lib/kubelet/pods/44a2f827-854b-449a-84ef-1056dd3f6551/volumes" Feb 03 11:15:38 crc kubenswrapper[5010]: I0203 11:15:38.685461 5010 scope.go:117] "RemoveContainer" containerID="da5a6743ef56c67276b9a41831c4be7bccdaf47755f96146ee789a456925019b" Feb 03 11:15:38 crc kubenswrapper[5010]: I0203 11:15:38.685584 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-svtxv" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.091814 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-xfzbj"] Feb 03 11:15:39 crc kubenswrapper[5010]: E0203 11:15:39.092880 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70655f23-d08e-4b01-85a3-abe91c302928" containerName="collect-profiles" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.092901 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="70655f23-d08e-4b01-85a3-abe91c302928" containerName="collect-profiles" Feb 03 11:15:39 crc kubenswrapper[5010]: E0203 11:15:39.092922 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a2f827-854b-449a-84ef-1056dd3f6551" containerName="container-00" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.092929 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a2f827-854b-449a-84ef-1056dd3f6551" containerName="container-00" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.093144 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="70655f23-d08e-4b01-85a3-abe91c302928" containerName="collect-profiles" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.093175 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a2f827-854b-449a-84ef-1056dd3f6551" containerName="container-00" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.094068 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.185980 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d8kk\" (UniqueName: \"kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.186277 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.289349 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d8kk\" (UniqueName: \"kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.289442 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.289653 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.326193 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d8kk\" (UniqueName: \"kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk\") pod \"crc-debug-xfzbj\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.419668 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:39 crc kubenswrapper[5010]: I0203 11:15:39.702553 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" event={"ID":"1e8c915b-848b-484b-9bea-d9b01737deb8","Type":"ContainerStarted","Data":"63813946f52060743276adfc0a668470e564cbc2eb88f2cce410e37b7f6b53fc"} Feb 03 11:15:40 crc kubenswrapper[5010]: I0203 11:15:40.714638 5010 generic.go:334] "Generic (PLEG): container finished" podID="1e8c915b-848b-484b-9bea-d9b01737deb8" containerID="51fcb5bf6651fadaf5858665eb6318be90bb636a234fbb36614ef116c2582598" exitCode=0 Feb 03 11:15:40 crc kubenswrapper[5010]: I0203 11:15:40.714817 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" event={"ID":"1e8c915b-848b-484b-9bea-d9b01737deb8","Type":"ContainerDied","Data":"51fcb5bf6651fadaf5858665eb6318be90bb636a234fbb36614ef116c2582598"} Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.207846 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-xfzbj"] Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.222972 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-xfzbj"] Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.857547 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.963965 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d8kk\" (UniqueName: \"kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk\") pod \"1e8c915b-848b-484b-9bea-d9b01737deb8\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.964823 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host\") pod \"1e8c915b-848b-484b-9bea-d9b01737deb8\" (UID: \"1e8c915b-848b-484b-9bea-d9b01737deb8\") " Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.965591 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host" (OuterVolumeSpecName: "host") pod "1e8c915b-848b-484b-9bea-d9b01737deb8" (UID: "1e8c915b-848b-484b-9bea-d9b01737deb8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:15:41 crc kubenswrapper[5010]: I0203 11:15:41.974531 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk" (OuterVolumeSpecName: "kube-api-access-9d8kk") pod "1e8c915b-848b-484b-9bea-d9b01737deb8" (UID: "1e8c915b-848b-484b-9bea-d9b01737deb8"). InnerVolumeSpecName "kube-api-access-9d8kk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.067636 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d8kk\" (UniqueName: \"kubernetes.io/projected/1e8c915b-848b-484b-9bea-d9b01737deb8-kube-api-access-9d8kk\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.067704 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e8c915b-848b-484b-9bea-d9b01737deb8-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.516680 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e8c915b-848b-484b-9bea-d9b01737deb8" path="/var/lib/kubelet/pods/1e8c915b-848b-484b-9bea-d9b01737deb8/volumes" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.543320 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-k79rg"] Feb 03 11:15:42 crc kubenswrapper[5010]: E0203 11:15:42.543922 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8c915b-848b-484b-9bea-d9b01737deb8" containerName="container-00" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.543948 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8c915b-848b-484b-9bea-d9b01737deb8" containerName="container-00" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.544256 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e8c915b-848b-484b-9bea-d9b01737deb8" containerName="container-00" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.545473 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.681084 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgv9\" (UniqueName: \"kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.681269 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.735422 5010 scope.go:117] "RemoveContainer" containerID="51fcb5bf6651fadaf5858665eb6318be90bb636a234fbb36614ef116c2582598" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.735459 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-xfzbj" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.783685 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.783864 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgv9\" (UniqueName: \"kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:42 crc kubenswrapper[5010]: I0203 11:15:42.783864 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.084015 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgv9\" (UniqueName: \"kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9\") pod \"crc-debug-k79rg\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.167302 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:43 crc kubenswrapper[5010]: W0203 11:15:43.207733 5010 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafa2e74e_076a_4f5b_acf8_eb116df93c94.slice/crio-2c88ad76c0bf2b923d66e87f21333ac6fe94ecdca382cc7114120194b4200730 WatchSource:0}: Error finding container 2c88ad76c0bf2b923d66e87f21333ac6fe94ecdca382cc7114120194b4200730: Status 404 returned error can't find the container with id 2c88ad76c0bf2b923d66e87f21333ac6fe94ecdca382cc7114120194b4200730 Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.746492 5010 generic.go:334] "Generic (PLEG): container finished" podID="afa2e74e-076a-4f5b-acf8-eb116df93c94" containerID="406e4918c67a9656dc6cdcdad3d111483dbc23ef9b81287c1855292c83442925" exitCode=0 Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.746586 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" event={"ID":"afa2e74e-076a-4f5b-acf8-eb116df93c94","Type":"ContainerDied","Data":"406e4918c67a9656dc6cdcdad3d111483dbc23ef9b81287c1855292c83442925"} Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.747105 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" event={"ID":"afa2e74e-076a-4f5b-acf8-eb116df93c94","Type":"ContainerStarted","Data":"2c88ad76c0bf2b923d66e87f21333ac6fe94ecdca382cc7114120194b4200730"} Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.795237 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-k79rg"] Feb 03 11:15:43 crc kubenswrapper[5010]: I0203 11:15:43.804503 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mcw6z/crc-debug-k79rg"] Feb 03 11:15:44 crc 
kubenswrapper[5010]: I0203 11:15:44.879258 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.031350 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsgv9\" (UniqueName: \"kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9\") pod \"afa2e74e-076a-4f5b-acf8-eb116df93c94\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.031461 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host\") pod \"afa2e74e-076a-4f5b-acf8-eb116df93c94\" (UID: \"afa2e74e-076a-4f5b-acf8-eb116df93c94\") " Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.031969 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host" (OuterVolumeSpecName: "host") pod "afa2e74e-076a-4f5b-acf8-eb116df93c94" (UID: "afa2e74e-076a-4f5b-acf8-eb116df93c94"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.032512 5010 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/afa2e74e-076a-4f5b-acf8-eb116df93c94-host\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.683563 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9" (OuterVolumeSpecName: "kube-api-access-dsgv9") pod "afa2e74e-076a-4f5b-acf8-eb116df93c94" (UID: "afa2e74e-076a-4f5b-acf8-eb116df93c94"). InnerVolumeSpecName "kube-api-access-dsgv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.750488 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsgv9\" (UniqueName: \"kubernetes.io/projected/afa2e74e-076a-4f5b-acf8-eb116df93c94-kube-api-access-dsgv9\") on node \"crc\" DevicePath \"\"" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.771638 5010 scope.go:117] "RemoveContainer" containerID="406e4918c67a9656dc6cdcdad3d111483dbc23ef9b81287c1855292c83442925" Feb 03 11:15:45 crc kubenswrapper[5010]: I0203 11:15:45.771722 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mcw6z/crc-debug-k79rg" Feb 03 11:15:46 crc kubenswrapper[5010]: I0203 11:15:46.516048 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa2e74e-076a-4f5b-acf8-eb116df93c94" path="/var/lib/kubelet/pods/afa2e74e-076a-4f5b-acf8-eb116df93c94/volumes" Feb 03 11:15:51 crc kubenswrapper[5010]: I0203 11:15:51.503515 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:15:51 crc kubenswrapper[5010]: E0203 11:15:51.504645 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:16:06 crc kubenswrapper[5010]: I0203 11:16:06.503135 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:16:06 crc kubenswrapper[5010]: E0203 11:16:06.504579 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:16:20 crc kubenswrapper[5010]: I0203 11:16:20.508687 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:16:20 crc kubenswrapper[5010]: E0203 11:16:20.511056 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.800199 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:21 crc kubenswrapper[5010]: E0203 11:16:21.802467 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa2e74e-076a-4f5b-acf8-eb116df93c94" containerName="container-00" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.802589 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa2e74e-076a-4f5b-acf8-eb116df93c94" containerName="container-00" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.802915 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa2e74e-076a-4f5b-acf8-eb116df93c94" containerName="container-00" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.805705 5010 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.814841 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.957073 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.957523 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:21 crc kubenswrapper[5010]: I0203 11:16:21.957830 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz5xd\" (UniqueName: \"kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.060314 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz5xd\" (UniqueName: \"kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.060418 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.060459 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.061372 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.061734 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.087560 5010 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cz5xd\" (UniqueName: \"kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd\") pod \"redhat-marketplace-w8svr\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.145474 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:22 crc kubenswrapper[5010]: I0203 11:16:22.716883 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:23 crc kubenswrapper[5010]: I0203 11:16:23.250674 5010 generic.go:334] "Generic (PLEG): container finished" podID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerID="64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a" exitCode=0 Feb 03 11:16:23 crc kubenswrapper[5010]: I0203 11:16:23.250716 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerDied","Data":"64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a"} Feb 03 11:16:23 crc kubenswrapper[5010]: I0203 11:16:23.250900 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerStarted","Data":"a625a88c3772c4a6e67478d73e58636fba9dd936e9e8c89dbafdce51c27cd0d3"} Feb 03 11:16:24 crc kubenswrapper[5010]: I0203 11:16:24.601929 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f67746f54-2l6b9_3bab826b-af5f-4bd1-a68a-0bdda5f89d80/barbican-api/0.log" Feb 03 11:16:24 crc kubenswrapper[5010]: I0203 11:16:24.843786 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6f67746f54-2l6b9_3bab826b-af5f-4bd1-a68a-0bdda5f89d80/barbican-api-log/0.log" Feb 03 11:16:24 crc kubenswrapper[5010]: I0203 11:16:24.936404 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85855ff49d-76x8k_f377630f-64f3-4fd9-8449-53d739d775c2/barbican-keystone-listener-log/0.log" Feb 03 11:16:24 crc kubenswrapper[5010]: I0203 11:16:24.959669 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-85855ff49d-76x8k_f377630f-64f3-4fd9-8449-53d739d775c2/barbican-keystone-listener/0.log" Feb 03 11:16:25 crc kubenswrapper[5010]: I0203 11:16:25.271537 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerStarted","Data":"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5"} Feb 03 11:16:25 crc kubenswrapper[5010]: I0203 11:16:25.800091 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bdd746887-zr9j6_4cb276c1-b6b3-45ef-84be-8bae1d46d9d7/barbican-worker/0.log" Feb 03 11:16:25 crc kubenswrapper[5010]: I0203 11:16:25.835191 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bdd746887-zr9j6_4cb276c1-b6b3-45ef-84be-8bae1d46d9d7/barbican-worker-log/0.log" Feb 03 11:16:25 crc kubenswrapper[5010]: I0203 11:16:25.903326 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-n5mzf_2d389772-7902-4aca-8bc3-03a0708fbaa2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.083061 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/ceilometer-central-agent/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.147607 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/ceilometer-notification-agent/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.190274 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/proxy-httpd/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.285441 5010 generic.go:334] "Generic (PLEG): container finished" podID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerID="6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5" exitCode=0 Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.285502 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerDied","Data":"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5"} Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.296900 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fe58e747-c39e-4370-93bc-f72f8c5ee95a/sg-core/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.447226 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7e079d37-86a2-4be8-a16b-821095c780f0/cinder-api-log/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.449399 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_7e079d37-86a2-4be8-a16b-821095c780f0/cinder-api/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.669342 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_63ed8c2d-6ac3-4a61-8e4c-1601efeca708/probe/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.716002 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_63ed8c2d-6ac3-4a61-8e4c-1601efeca708/cinder-scheduler/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.802592 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5tffc_efb76028-3500-476c-adef-dfc87d2cdab7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:26 crc kubenswrapper[5010]: I0203 11:16:26.943121 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ktk67_f4e7c571-ff51-496f-81b8-2fee3f357d3f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.052785 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/init/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.297688 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/init/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.298394 5010 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerStarted","Data":"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e"} Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.379078 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w8svr" podStartSLOduration=2.649709281 podStartE2EDuration="6.379052465s" podCreationTimestamp="2026-02-03 11:16:21 +0000 UTC" firstStartedPulling="2026-02-03 11:16:23.253043231 +0000 UTC m=+4453.409019360" lastFinishedPulling="2026-02-03 11:16:26.982386415 +0000 UTC m=+4457.138362544" observedRunningTime="2026-02-03 11:16:27.330748068 +0000 UTC m=+4457.486724207" watchObservedRunningTime="2026-02-03 11:16:27.379052465 +0000 UTC m=+4457.535028594" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.401074 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-845df_3d935acc-a244-4c1f-a9f8-9924fa8b61f1/dnsmasq-dns/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.424070 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kgcrs_96722ef6-9c22-4700-8163-b25503d014bd/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.625755 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1769cccf-496c-4370-8e08-e1f156fecd77/glance-log/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.688876 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1769cccf-496c-4370-8e08-e1f156fecd77/glance-httpd/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.959416 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a/glance-httpd/0.log" Feb 03 11:16:27 crc kubenswrapper[5010]: I0203 11:16:27.960183 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9df7182f-e3e9-40bf-bfb2-b2e9ef64f90a/glance-log/0.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.166462 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon/1.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.309118 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-msc5t_af6128d5-2369-4ef9-99aa-61ad0bf3b213/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.391907 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon/0.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.661706 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cc988db4-2mpfb_2fedcc57-b16c-4177-a10e-f627269b4adb/horizon-log/0.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.695306 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-hz8vx_49056616-86cd-41cd-a102-1072dc2a79f4/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:28 crc kubenswrapper[5010]: I0203 11:16:28.973360 5010 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_keystone-cron-29501941-gv4sr_96c330a2-14f4-4923-8707-6b9cce98267f/keystone-cron/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.008643 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-675cc696d4-7wvtv_8ec2b13f-b7ea-4bd0-903b-d7a633e1f9f4/keystone-api/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.174409 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_de374df0-0b73-4be2-9719-d4b471782ed4/kube-state-metrics/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.270028 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dgj8d_5b7ff70c-1251-4fd5-a71c-bf6703bcc85d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.690864 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78c78c7889-r9575_158ac65e-849e-4f85-a4b6-1ac4bde1a1ec/neutron-httpd/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.739115 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78c78c7889-r9575_158ac65e-849e-4f85-a4b6-1ac4bde1a1ec/neutron-api/0.log" Feb 03 11:16:29 crc kubenswrapper[5010]: I0203 11:16:29.823422 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-zn64p_4451ba2d-33ae-4e6f-b14a-2a2673c2fe3e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:30 crc kubenswrapper[5010]: I0203 11:16:30.324654 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aba2689d-cd13-4601-ac45-69409c411839/nova-api-log/0.log" Feb 03 11:16:30 crc kubenswrapper[5010]: I0203 11:16:30.415228 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_26dec936-0343-4d5f-8f2b-cf2a797786b5/nova-cell0-conductor-conductor/0.log" Feb 03 11:16:30 crc kubenswrapper[5010]: I0203 11:16:30.868309 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_291a9878-85fe-4988-8a7d-1da10ac49b23/nova-cell1-conductor-conductor/0.log" Feb 03 11:16:30 crc kubenswrapper[5010]: I0203 11:16:30.879160 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aba2689d-cd13-4601-ac45-69409c411839/nova-api-api/0.log" Feb 03 11:16:30 crc kubenswrapper[5010]: I0203 11:16:30.884712 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c9bd4788-ae5f-49c4-8116-04076a16f4f1/nova-cell1-novncproxy-novncproxy/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.123672 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-bq7n5_6fd37dcf-e81a-491a-a5e1-01a27517d1b4/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.328475 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_edaaf3a7-a254-4a29-875a-643e46308f33/nova-metadata-log/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.638112 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_28559aae-4731-4653-a466-8c6f5c6c7dcf/nova-scheduler-scheduler/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.663392 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/mysql-bootstrap/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.850454 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/mysql-bootstrap/0.log" Feb 03 11:16:31 crc kubenswrapper[5010]: I0203 11:16:31.961586 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_87eb5dd8-7171-457a-8a95-eda98893319a/galera/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.108040 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/mysql-bootstrap/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.145601 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.145654 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.206104 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.357444 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/galera/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.362822 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_449f0b91-9186-4a16-b1b4-7f199b57a428/mysql-bootstrap/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.410280 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.495857 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.615317 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c80632c0-72bc-461d-8e87-591d0ddbc1a8/openstackclient/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.735046 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vqkq5_5235b9fc-3723-4d8a-9851-e8ee89c0b084/openstack-network-exporter/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.932988 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server-init/0.log" Feb 03 11:16:32 crc kubenswrapper[5010]: I0203 11:16:32.954509 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_edaaf3a7-a254-4a29-875a-643e46308f33/nova-metadata-metadata/0.log" Feb 03 11:16:33 crc kubenswrapper[5010]: I0203 11:16:33.459021 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server/0.log" Feb 03 11:16:33 crc kubenswrapper[5010]: I0203 11:16:33.502712 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:16:33 crc kubenswrapper[5010]: E0203 11:16:33.503040 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:16:33 crc kubenswrapper[5010]: I0203 11:16:33.522490 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovsdb-server-init/0.log" Feb 03 11:16:33 crc kubenswrapper[5010]: I0203 11:16:33.525141 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-krnr5_b2780eb3-7b7a-47fe-bda0-2605419df774/ovs-vswitchd/0.log" Feb 03 11:16:33 crc kubenswrapper[5010]: I0203 11:16:33.699517 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ql6ht_1883c30e-4c38-468d-a5dc-91b07f167d67/ovn-controller/0.log" Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.370002 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w8svr" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="registry-server" containerID="cri-o://9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e" gracePeriod=2 Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.589244 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5158e153-9918-4fce-8f2f-75a87b96562b/openstack-network-exporter/0.log" Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.625151 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-js9ms_a3aac34b-fb9e-4853-9a1d-c311dc75f055/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.869013 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5158e153-9918-4fce-8f2f-75a87b96562b/ovn-northd/0.log" Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.936206 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6d6abf1f-9905-4f96-8d44-d7ef3f9f299d/openstack-network-exporter/0.log" Feb 03 11:16:34 crc kubenswrapper[5010]: I0203 11:16:34.949655 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.022630 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6d6abf1f-9905-4f96-8d44-d7ef3f9f299d/ovsdbserver-nb/0.log" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.048724 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content\") pod \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.048924 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities\") pod \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.048967 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz5xd\" (UniqueName: \"kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd\") pod \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\" (UID: \"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774\") " Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.049778 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities" (OuterVolumeSpecName: "utilities") pod "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" (UID: "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.056527 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd" (OuterVolumeSpecName: "kube-api-access-cz5xd") pod "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" (UID: "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774"). InnerVolumeSpecName "kube-api-access-cz5xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.075032 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" (UID: "8a446ddb-d2f5-4eaf-8be0-2d051c4e6774"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.151520 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.151568 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.151585 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz5xd\" (UniqueName: \"kubernetes.io/projected/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774-kube-api-access-cz5xd\") on node \"crc\" DevicePath \"\"" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.195467 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6dfa0a64-db8a-457a-8eff-f27ffa8e02ce/openstack-network-exporter/0.log" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.284420 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6dfa0a64-db8a-457a-8eff-f27ffa8e02ce/ovsdbserver-sb/0.log" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.380704 5010 generic.go:334] "Generic (PLEG): container finished" podID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerID="9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e" exitCode=0 Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.380776 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerDied","Data":"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e"} Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.380808 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8svr" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.380825 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8svr" event={"ID":"8a446ddb-d2f5-4eaf-8be0-2d051c4e6774","Type":"ContainerDied","Data":"a625a88c3772c4a6e67478d73e58636fba9dd936e9e8c89dbafdce51c27cd0d3"} Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.380866 5010 scope.go:117] "RemoveContainer" containerID="9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.417448 5010 scope.go:117] "RemoveContainer" containerID="6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.431662 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.441824 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8svr"] Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.443754 5010 scope.go:117] "RemoveContainer" containerID="64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.503098 5010 scope.go:117] "RemoveContainer" containerID="9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e" Feb 03 11:16:35 crc kubenswrapper[5010]: E0203 11:16:35.503535 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e\": container with ID starting with 9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e not found: ID does not exist" containerID="9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.503593 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e"} err="failed to get container status \"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e\": rpc error: code = NotFound desc = could not find container \"9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e\": container with ID starting with 9cb2d58188fb8822776f096601deece4f26f1bba6a86c527de890733973b1c6e not found: ID does not exist" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.503662 5010 scope.go:117] "RemoveContainer" containerID="6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5" Feb 03 11:16:35 crc kubenswrapper[5010]: E0203 11:16:35.503946 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5\": container with ID starting with 6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5 not found: ID does not exist" containerID="6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.503975 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5"} err="failed to get container status \"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5\": rpc error: code = NotFound desc = could not find 
container \"6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5\": container with ID starting with 6e7d114f087a9f8bbe826a9b9ddb87ea49927051ff280e3c70635c184504fca5 not found: ID does not exist" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.503993 5010 scope.go:117] "RemoveContainer" containerID="64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a" Feb 03 11:16:35 crc kubenswrapper[5010]: E0203 11:16:35.504201 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a\": container with ID starting with 64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a not found: ID does not exist" containerID="64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.504274 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a"} err="failed to get container status \"64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a\": rpc error: code = NotFound desc = could not find container \"64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a\": container with ID starting with 64748690ace80dc376f2cdc62838e4d8d9449a8a1101e3d0a945d61fc654c51a not found: ID does not exist" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.757988 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bc6c5cf68-f9b4p_3ecd94c1-1faa-4acd-aa24-dd54388d2d99/placement-api/0.log" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.765382 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bc6c5cf68-f9b4p_3ecd94c1-1faa-4acd-aa24-dd54388d2d99/placement-log/0.log" Feb 03 11:16:35 crc kubenswrapper[5010]: I0203 11:16:35.777869 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/setup-container/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.001042 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/rabbitmq/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.008390 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9044f36b-9c2b-47bf-b1a3-46c14c6ec5cf/setup-container/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.102057 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/setup-container/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.308958 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/setup-container/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.364231 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_543f315d-d2f8-497f-a2c1-1a929c1611be/rabbitmq/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.373419 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qpxpt_d4357ef1-04ea-4dbd-acd8-70f34a5a72a1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:36 crc kubenswrapper[5010]: I0203 11:16:36.516582 5010 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" path="/var/lib/kubelet/pods/8a446ddb-d2f5-4eaf-8be0-2d051c4e6774/volumes" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.205979 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-r8zqk_36d3f978-a301-44e6-a401-72e94c9f70ad/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.247910 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mg749_43ecdc43-d866-4902-89cb-0ce68e89fe05/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.517951 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-nm955_a9fa7d27-81da-4dcd-adef-cb22c35d2641/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.561034 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pfhx5_67a7675c-9074-4390-85ab-2bba845b2dc0/ssh-known-hosts-edpm-deployment/0.log" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.870226 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7594db59b7-8cg94_a0d01af0-abb7-4cd1-92d7-d741182948f9/proxy-server/0.log" Feb 03 11:16:37 crc kubenswrapper[5010]: I0203 11:16:37.993197 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-n8qtn_65c9ffaf-83e3-47c1-a1e8-b097b371ccec/swift-ring-rebalance/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.018692 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7594db59b7-8cg94_a0d01af0-abb7-4cd1-92d7-d741182948f9/proxy-httpd/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.180954 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-auditor/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.302517 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-reaper/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.397189 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-server/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.399411 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/account-replicator/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.412744 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-auditor/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.579093 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-replicator/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.627647 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-server/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.643238 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/container-updater/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.668300 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-auditor/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.907360 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-server/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.908990 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-updater/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.913930 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-expirer/0.log" Feb 03 11:16:38 crc kubenswrapper[5010]: I0203 11:16:38.935393 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/object-replicator/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.095743 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/rsync/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.100348 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b58c504-f707-43fe-91ca-4328c58e998c/swift-recon-cron/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.259838 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-f4b6h_7353ead1-b7ae-446c-a262-5a383b1d7e52/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.453901 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_8c8d92ab-5652-4bd9-81af-fd0be7aea36f/tempest-tests-tempest-tests-runner/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.495808 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8dfa1254-0d2c-4885-a531-fc90541692e7/test-operator-logs-container/0.log" Feb 03 11:16:39 crc kubenswrapper[5010]: I0203 11:16:39.715687 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4k7r7_3109739d-69b7-439a-b6c4-a8affbe0af4f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 11:16:45 crc kubenswrapper[5010]: I0203 11:16:45.501798 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:16:45 crc kubenswrapper[5010]: E0203 11:16:45.502636 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:16:49 crc kubenswrapper[5010]: I0203 11:16:49.170508 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_95adc2d1-1093-484e-8580-53e244b420c8/memcached/0.log" Feb 03 11:16:56 crc 
kubenswrapper[5010]: I0203 11:16:56.502877 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:16:56 crc kubenswrapper[5010]: E0203 11:16:56.504155 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:17:09 crc kubenswrapper[5010]: I0203 11:17:09.502972 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:17:09 crc kubenswrapper[5010]: E0203 11:17:09.506586 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.118549 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.322282 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.344729 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.364427 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.522581 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/pull/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.541438 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/extract/0.log" Feb 03 11:17:10 crc kubenswrapper[5010]: I0203 11:17:10.552072 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2849e1fa4d4c7ae48179c158d654d637d9517d3014fb1e8b58ecd598c6x9khc_878224e8-6bbb-4b7f-9aff-b2bf21eef4bb/util/0.log" Feb 03 11:17:11 crc kubenswrapper[5010]: I0203 11:17:11.486377 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-52g72_a7d72ea1-7126-4768-9cf8-f590ebd216d7/manager/0.log" Feb 03 11:17:11 crc kubenswrapper[5010]: I0203 11:17:11.504873 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-jvb56_74803e29-48a3-4667-bcdb-a94f381545b5/manager/0.log" Feb 03 11:17:11 crc kubenswrapper[5010]: I0203 11:17:11.699634 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-j87lc_fd413d86-2cda-4079-a895-5cb60928a47f/manager/0.log" Feb 03 11:17:11 crc kubenswrapper[5010]: I0203 11:17:11.813469 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-gnxws_9fa8a872-8dc5-4e6d-838a-5dc54e6d4bbe/manager/0.log" Feb 03 11:17:11 crc kubenswrapper[5010]: I0203 11:17:11.928687 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-7szqs_d33dc0fd-847b-41cc-a8ac-afde40120ba2/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.043179 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-k765q_9dc494bd-d6ef-4a22-8312-67750ebb3dbe/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.244049 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-w7ldz_2f204595-5d98-4c16-b5d1-5004c6cae836/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.339583 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-vlmtm_5fafda3f-e0cd-4477-9c10-442af83a835b/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.527615 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-qrkwl_7f20ca5f-d244-45be-864d-3b8ad3d456ea/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.565149 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-gb8tp_1a136ea1-ab68-4f60-8fb2-969363f25337/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.768700 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-5zbbw_42f76062-3a9d-45c1-b928-d9ca236ec8ab/manager/0.log" Feb 03 11:17:12 crc kubenswrapper[5010]: I0203 11:17:12.877518 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-pwdks_4f112d60-8db7-4ec2-a82d-c7627ade05a3/manager/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.112377 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-t47jc_21f46dec-fb01-4293-ad08-706eb63a8738/manager/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.117251 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-5lzr6_27ab6ab7-e411-466c-bc4a-97d1660c547e/manager/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.305147 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dpb2vs_76bde002-75f6-4c4a-af3d-16aec5a221f4/manager/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.464009 5010 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-578f994c6c-72ld2_bde44bc9-c06a-4c2b-aad8-6f3247272024/operator/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.659692 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fv5km_1e93c0a0-5a7b-40d7-aaee-e31455baf139/registry-server/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.913169 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-g8qz8_3e47047f-9303-47e2-8312-c83315e1a3ff/manager/0.log" Feb 03 11:17:13 crc kubenswrapper[5010]: I0203 11:17:13.945718 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-d99mj_8251c193-3c53-4651-87da-8b216cf907aa/manager/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.133583 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-kj7mj_2cbbe9fa-4c61-41fc-9a62-41dbaea09a0a/operator/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.241843 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-mrvfq_84af1f21-c29e-4846-9ce1-ea345cbad4fc/manager/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.479260 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-pgwx2_a62d6669-692b-4909-b192-4348ac82a50d/manager/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.497122 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ck5g7_e51fff09-23b1-4bf0-b4e2-eeb2e6ee3c58/manager/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.672026 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-844f879456-5ktjc_54aaeb1d-8a23-413f-b1f4-5115b167d78b/manager/0.log" Feb 03 11:17:14 crc kubenswrapper[5010]: I0203 11:17:14.744595 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-ftqqr_37a4f3fa-bbaf-433d-9835-6ac576351651/manager/0.log" Feb 03 11:17:24 crc kubenswrapper[5010]: I0203 11:17:24.502405 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:17:24 crc kubenswrapper[5010]: E0203 11:17:24.503417 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:17:35 crc kubenswrapper[5010]: I0203 11:17:35.503360 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:17:35 crc kubenswrapper[5010]: E0203 11:17:35.506053 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:17:37 crc kubenswrapper[5010]: I0203 11:17:37.411386 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xcpwg_ba766e4c-056f-4be6-a4b9-05592b641f87/control-plane-machine-set-operator/0.log" Feb 03 11:17:37 crc kubenswrapper[5010]: I0203 11:17:37.721332 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5mq4r_dc73dc6e-53ff-48b8-932e-d5aeb839f2dd/kube-rbac-proxy/0.log" Feb 03 11:17:37 crc kubenswrapper[5010]: I0203 11:17:37.744105 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5mq4r_dc73dc6e-53ff-48b8-932e-d5aeb839f2dd/machine-api-operator/0.log" Feb 03 11:17:47 crc kubenswrapper[5010]: I0203 11:17:47.503153 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:17:47 crc kubenswrapper[5010]: E0203 11:17:47.504508 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:17:52 crc kubenswrapper[5010]: I0203 11:17:52.594312 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-wtwpn_7746ae6f-d9a0-4bba-a7bc-4920ed478ff4/cert-manager-controller/0.log" Feb 03 11:17:52 crc kubenswrapper[5010]: I0203 11:17:52.778902 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-b5ngd_b9d02d93-3df5-4e4a-99b3-07329087dc2c/cert-manager-cainjector/0.log" Feb 03 11:17:52 crc kubenswrapper[5010]: I0203 11:17:52.867396 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-bfc2c_26bf0193-c1b8-4018-a7e4-4429a4292dfb/cert-manager-webhook/0.log" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.337631 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-89hxw"] Feb 03 11:17:55 crc kubenswrapper[5010]: E0203 11:17:55.340291 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="extract-utilities" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.340350 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="extract-utilities" Feb 03 11:17:55 crc kubenswrapper[5010]: E0203 11:17:55.340364 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="registry-server" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.340373 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="registry-server" Feb 03 11:17:55 crc kubenswrapper[5010]: E0203 11:17:55.340401 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" 
containerName="extract-content" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.340406 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="extract-content" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.340641 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a446ddb-d2f5-4eaf-8be0-2d051c4e6774" containerName="registry-server" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.344761 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.377423 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89hxw"] Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.437160 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5plbq\" (UniqueName: \"kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.437279 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.437345 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.539467 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.539652 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5plbq\" (UniqueName: \"kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.539757 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.540093 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content\") pod \"certified-operators-89hxw\" (UID: 
\"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.540282 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.580462 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5plbq\" (UniqueName: \"kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq\") pod \"certified-operators-89hxw\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:55 crc kubenswrapper[5010]: I0203 11:17:55.684168 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:17:56 crc kubenswrapper[5010]: I0203 11:17:56.210184 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89hxw"] Feb 03 11:17:56 crc kubenswrapper[5010]: I0203 11:17:56.265611 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerStarted","Data":"fb6b55f00f377b2ed89fbe28d48708a548b28708983b2041366114f9dd31d5da"} Feb 03 11:17:57 crc kubenswrapper[5010]: I0203 11:17:57.277270 5010 generic.go:334] "Generic (PLEG): container finished" podID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerID="2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74" exitCode=0 Feb 03 11:17:57 crc kubenswrapper[5010]: I0203 11:17:57.277324 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerDied","Data":"2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74"} Feb 03 11:17:57 crc kubenswrapper[5010]: I0203 11:17:57.280632 5010 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 11:17:59 crc kubenswrapper[5010]: I0203 11:17:59.299785 5010 generic.go:334] "Generic (PLEG): container finished" podID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerID="6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812" exitCode=0 Feb 03 11:17:59 crc kubenswrapper[5010]: I0203 11:17:59.299884 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerDied","Data":"6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812"} Feb 03 11:17:59 crc kubenswrapper[5010]: I0203 11:17:59.502499 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938" Feb 03 11:17:59 crc kubenswrapper[5010]: E0203 11:17:59.502820 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" 
podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" Feb 03 11:18:01 crc kubenswrapper[5010]: I0203 11:18:01.323992 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerStarted","Data":"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16"} Feb 03 11:18:01 crc kubenswrapper[5010]: I0203 11:18:01.354457 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-89hxw" podStartSLOduration=3.862059736 podStartE2EDuration="6.354421747s" podCreationTimestamp="2026-02-03 11:17:55 +0000 UTC" firstStartedPulling="2026-02-03 11:17:57.280032474 +0000 UTC m=+4547.436008623" lastFinishedPulling="2026-02-03 11:17:59.772394505 +0000 UTC m=+4549.928370634" observedRunningTime="2026-02-03 11:18:01.34587933 +0000 UTC m=+4551.501855459" watchObservedRunningTime="2026-02-03 11:18:01.354421747 +0000 UTC m=+4551.510397876" Feb 03 11:18:05 crc kubenswrapper[5010]: I0203 11:18:05.684327 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:05 crc kubenswrapper[5010]: I0203 11:18:05.684780 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:05 crc kubenswrapper[5010]: I0203 11:18:05.732631 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:06 crc kubenswrapper[5010]: I0203 11:18:06.446002 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:06 crc kubenswrapper[5010]: I0203 11:18:06.513044 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-89hxw"] Feb 03 11:18:08 crc kubenswrapper[5010]: I0203 11:18:08.425180 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-89hxw" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="registry-server" containerID="cri-o://05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16" gracePeriod=2 Feb 03 11:18:08 crc kubenswrapper[5010]: I0203 11:18:08.804568 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-npjjg_a09e0456-1529-4ece-9266-d02a283d6bd1/nmstate-console-plugin/0.log" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.010689 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.054928 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5plbq\" (UniqueName: \"kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq\") pod \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.055066 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content\") pod \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.055302 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities\") pod \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\" (UID: \"d167930a-e7f9-4572-b3f5-050ef9b2ba5b\") " Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.056242 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities" (OuterVolumeSpecName: "utilities") pod "d167930a-e7f9-4572-b3f5-050ef9b2ba5b" (UID: "d167930a-e7f9-4572-b3f5-050ef9b2ba5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.076661 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq" (OuterVolumeSpecName: "kube-api-access-5plbq") pod "d167930a-e7f9-4572-b3f5-050ef9b2ba5b" (UID: "d167930a-e7f9-4572-b3f5-050ef9b2ba5b"). InnerVolumeSpecName "kube-api-access-5plbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.159566 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5plbq\" (UniqueName: \"kubernetes.io/projected/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-kube-api-access-5plbq\") on node \"crc\" DevicePath \"\"" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.159646 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.223398 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-55jg2_d47b696a-a1d0-4389-a099-7f375ab72f8c/nmstate-handler/0.log" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.346033 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hl7ls_552fa369-352c-4690-aa39-f0364021feae/kube-rbac-proxy/0.log" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.453665 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hl7ls_552fa369-352c-4690-aa39-f0364021feae/nmstate-metrics/0.log" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.460704 5010 generic.go:334] "Generic (PLEG): container finished" podID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerID="05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16" exitCode=0 Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.460790 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerDied","Data":"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16"} Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.462543 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89hxw" event={"ID":"d167930a-e7f9-4572-b3f5-050ef9b2ba5b","Type":"ContainerDied","Data":"fb6b55f00f377b2ed89fbe28d48708a548b28708983b2041366114f9dd31d5da"} Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.462604 5010 scope.go:117] "RemoveContainer" containerID="05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.462711 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89hxw" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.523912 5010 scope.go:117] "RemoveContainer" containerID="6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.605464 5010 scope.go:117] "RemoveContainer" containerID="2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.662934 5010 scope.go:117] "RemoveContainer" containerID="05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16" Feb 03 11:18:09 crc kubenswrapper[5010]: E0203 11:18:09.666478 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16\": container with ID starting with 05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16 not found: ID does not exist" containerID="05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.666551 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16"} err="failed to get container status \"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16\": rpc error: code = NotFound desc = could not find container \"05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16\": container with ID starting with 05958169b1ff6ef390e33cc7cbfd43c9c725a79cd08957ec41541dfd67b36f16 not found: ID does not exist" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.666596 5010 scope.go:117] "RemoveContainer" containerID="6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812" Feb 03 11:18:09 crc kubenswrapper[5010]: E0203 11:18:09.672977 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812\": container with ID starting with 6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812 not found: ID does not exist" containerID="6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.673042 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812"} err="failed to get container status \"6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812\": rpc error: code = NotFound desc = could not find container \"6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812\": container with ID starting with 6afb764f8871d7cb1c5cc4aa2c30725d8fcbb88fff2cbc3ce63a8d9eb3489812 not found: ID does not exist" Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.673078 5010 scope.go:117] "RemoveContainer" containerID="2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74" Feb 03 11:18:09 crc kubenswrapper[5010]: E0203 11:18:09.676611 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74\": container with ID starting with 2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74 not found: ID does not exist" containerID="2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74" 
Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.676642 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74"} err="failed to get container status \"2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74\": rpc error: code = NotFound desc = could not find container \"2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74\": container with ID starting with 2310ea87c7a1ec4068ebcd6b6d595874523381b62a1774ab67e74c04cf81ae74 not found: ID does not exist"
Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.761957 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-frs8s_e5c85e5b-ab19-414d-97e6-767b9e01f731/nmstate-operator/0.log"
Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.843403 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-2xtg6_1336bbfa-f4c5-4e35-9b48-d0e8df8f3e7a/nmstate-webhook/0.log"
Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.867029 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d167930a-e7f9-4572-b3f5-050ef9b2ba5b" (UID: "d167930a-e7f9-4572-b3f5-050ef9b2ba5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:18:09 crc kubenswrapper[5010]: I0203 11:18:09.934962 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d167930a-e7f9-4572-b3f5-050ef9b2ba5b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 11:18:10 crc kubenswrapper[5010]: I0203 11:18:10.097617 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-89hxw"]
Feb 03 11:18:10 crc kubenswrapper[5010]: I0203 11:18:10.110688 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-89hxw"]
Feb 03 11:18:10 crc kubenswrapper[5010]: I0203 11:18:10.518305 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" path="/var/lib/kubelet/pods/d167930a-e7f9-4572-b3f5-050ef9b2ba5b/volumes"
Feb 03 11:18:10 crc kubenswrapper[5010]: I0203 11:18:10.520177 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:18:10 crc kubenswrapper[5010]: E0203 11:18:10.520594 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:18:21 crc kubenswrapper[5010]: I0203 11:18:21.503329 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:18:21 crc kubenswrapper[5010]: E0203 11:18:21.504273 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:18:24 crc kubenswrapper[5010]: I0203 11:18:24.498697 5010 scope.go:117] "RemoveContainer" containerID="3bd849a4e703cdb76aecc93972aa5f7990799fc9bee08fac17023aef5ff87483"
Feb 03 11:18:24 crc kubenswrapper[5010]: I0203 11:18:24.519711 5010 scope.go:117] "RemoveContainer" containerID="2edd458b2cfaa2b6e29690d9b6dedd98ec6688b7df796df1d92ea15b8aa6954c"
Feb 03 11:18:24 crc kubenswrapper[5010]: I0203 11:18:24.598099 5010 scope.go:117] "RemoveContainer" containerID="306bee7e759854f6a192fe0ffdf5df25e12e0a3028ac1c2be5e4c36d51b30a5f"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.482031 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:32 crc kubenswrapper[5010]: E0203 11:18:32.483060 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="registry-server"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.483082 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="registry-server"
Feb 03 11:18:32 crc kubenswrapper[5010]: E0203 11:18:32.483107 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="extract-content"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.483116 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="extract-content"
Feb 03 11:18:32 crc kubenswrapper[5010]: E0203 11:18:32.483128 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="extract-utilities"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.483138 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="extract-utilities"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.483393 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="d167930a-e7f9-4572-b3f5-050ef9b2ba5b" containerName="registry-server"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.485086 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.496861 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.568723 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.568897 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrvl9\" (UniqueName: \"kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.569038 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.671683 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrvl9\" (UniqueName: \"kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.671838 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.671978 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.672542 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:32 crc kubenswrapper[5010]: I0203 11:18:32.672557 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:33 crc kubenswrapper[5010]: I0203 11:18:33.382981 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrvl9\" (UniqueName: \"kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9\") pod \"community-operators-x2rfb\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") " pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:33 crc kubenswrapper[5010]: I0203 11:18:33.537728 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:34 crc kubenswrapper[5010]: I0203 11:18:34.260237 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:34 crc kubenswrapper[5010]: I0203 11:18:34.759411 5010 generic.go:334] "Generic (PLEG): container finished" podID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerID="95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d" exitCode=0
Feb 03 11:18:34 crc kubenswrapper[5010]: I0203 11:18:34.759625 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerDied","Data":"95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d"}
Feb 03 11:18:34 crc kubenswrapper[5010]: I0203 11:18:34.759673 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerStarted","Data":"2bff1107a7587f0594df99e45b328a27a6bd5035f60166a6aa071a85d2d649db"}
Feb 03 11:18:35 crc kubenswrapper[5010]: I0203 11:18:35.504301 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:18:35 crc kubenswrapper[5010]: E0203 11:18:35.505685 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:18:36 crc kubenswrapper[5010]: I0203 11:18:36.781168 5010 generic.go:334] "Generic (PLEG): container finished" podID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerID="709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c" exitCode=0
Feb 03 11:18:36 crc kubenswrapper[5010]: I0203 11:18:36.781484 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerDied","Data":"709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c"}
Feb 03 11:18:37 crc kubenswrapper[5010]: I0203 11:18:37.809707 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerStarted","Data":"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"}
Feb 03 11:18:37 crc kubenswrapper[5010]: I0203 11:18:37.835956 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x2rfb" podStartSLOduration=3.17349481 podStartE2EDuration="5.835925248s" podCreationTimestamp="2026-02-03 11:18:32 +0000 UTC" firstStartedPulling="2026-02-03 11:18:34.761603314 +0000 UTC m=+4584.917579443" lastFinishedPulling="2026-02-03 11:18:37.424033762 +0000 UTC m=+4587.580009881" observedRunningTime="2026-02-03 11:18:37.830200103 +0000 UTC m=+4587.986176242" watchObservedRunningTime="2026-02-03 11:18:37.835925248 +0000 UTC m=+4587.991901387"
Feb 03 11:18:43 crc kubenswrapper[5010]: I0203 11:18:43.538265 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:43 crc kubenswrapper[5010]: I0203 11:18:43.538935 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:43 crc kubenswrapper[5010]: I0203 11:18:43.624339 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:43 crc kubenswrapper[5010]: I0203 11:18:43.930200 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:44 crc kubenswrapper[5010]: I0203 11:18:44.007681 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:44 crc kubenswrapper[5010]: I0203 11:18:44.951677 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lpqgh_19f856e9-2325-41eb-8ed3-4daff562e84a/kube-rbac-proxy/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.080759 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lpqgh_19f856e9-2325-41eb-8ed3-4daff562e84a/controller/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.206486 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.418721 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.466548 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.466997 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.475407 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.819636 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.839470 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.866899 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.869268 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:18:45 crc kubenswrapper[5010]: I0203 11:18:45.886692 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x2rfb" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="registry-server" containerID="cri-o://a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1" gracePeriod=2
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.074835 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-frr-files/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.127014 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-reloader/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.182239 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/controller/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.205826 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/cp-metrics/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.376094 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.462343 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/frr-metrics/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.484685 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/kube-rbac-proxy/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.494081 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content\") pod \"9526d09e-786a-4d86-a688-e4afe9b32bfe\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") "
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.494275 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrvl9\" (UniqueName: \"kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9\") pod \"9526d09e-786a-4d86-a688-e4afe9b32bfe\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") "
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.494353 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities\") pod \"9526d09e-786a-4d86-a688-e4afe9b32bfe\" (UID: \"9526d09e-786a-4d86-a688-e4afe9b32bfe\") "
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.497787 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities" (OuterVolumeSpecName: "utilities") pod "9526d09e-786a-4d86-a688-e4afe9b32bfe" (UID: "9526d09e-786a-4d86-a688-e4afe9b32bfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.502411 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:18:46 crc kubenswrapper[5010]: E0203 11:18:46.503104 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.505478 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/kube-rbac-proxy-frr/0.log"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.599824 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.889643 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9" (OuterVolumeSpecName: "kube-api-access-hrvl9") pod "9526d09e-786a-4d86-a688-e4afe9b32bfe" (UID: "9526d09e-786a-4d86-a688-e4afe9b32bfe"). InnerVolumeSpecName "kube-api-access-hrvl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.907643 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrvl9\" (UniqueName: \"kubernetes.io/projected/9526d09e-786a-4d86-a688-e4afe9b32bfe-kube-api-access-hrvl9\") on node \"crc\" DevicePath \"\""
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.910722 5010 generic.go:334] "Generic (PLEG): container finished" podID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerID="a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1" exitCode=0
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.910785 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerDied","Data":"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"}
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.910824 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2rfb" event={"ID":"9526d09e-786a-4d86-a688-e4afe9b32bfe","Type":"ContainerDied","Data":"2bff1107a7587f0594df99e45b328a27a6bd5035f60166a6aa071a85d2d649db"}
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.910849 5010 scope.go:117] "RemoveContainer" containerID="a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"
Feb 03 11:18:46 crc kubenswrapper[5010]: I0203 11:18:46.911131 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2rfb"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.019167 5010 scope.go:117] "RemoveContainer" containerID="709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.098396 5010 scope.go:117] "RemoveContainer" containerID="95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.144171 5010 scope.go:117] "RemoveContainer" containerID="a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"
Feb 03 11:18:47 crc kubenswrapper[5010]: E0203 11:18:47.157445 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1\": container with ID starting with a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1 not found: ID does not exist" containerID="a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.157505 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1"} err="failed to get container status \"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1\": rpc error: code = NotFound desc = could not find container \"a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1\": container with ID starting with a987d0a7eb433870929af7eb258cc7e562f1a8f4f7c3b90055f9c6789bb10bb1 not found: ID does not exist"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.157541 5010 scope.go:117] "RemoveContainer" containerID="709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c"
Feb 03 11:18:47 crc kubenswrapper[5010]: E0203 11:18:47.158166 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c\": container with ID starting with 709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c not found: ID does not exist" containerID="709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.158199 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c"} err="failed to get container status \"709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c\": rpc error: code = NotFound desc = could not find container \"709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c\": container with ID starting with 709289f36bf47f2729d6ffdaf061b4224d332c9018fd9e342bad19397d4f1d1c not found: ID does not exist"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.158228 5010 scope.go:117] "RemoveContainer" containerID="95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d"
Feb 03 11:18:47 crc kubenswrapper[5010]: E0203 11:18:47.158859 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d\": container with ID starting with 95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d not found: ID does not exist" containerID="95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.158894 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d"} err="failed to get container status \"95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d\": rpc error: code = NotFound desc = could not find container \"95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d\": container with ID starting with 95f76f669fdc4f4397ea034bc58d0d6c6368ea07265cb288d94cc6600da47f2d not found: ID does not exist"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.187417 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9526d09e-786a-4d86-a688-e4afe9b32bfe" (UID: "9526d09e-786a-4d86-a688-e4afe9b32bfe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.216297 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9526d09e-786a-4d86-a688-e4afe9b32bfe-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.245076 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dbqxw_f6ea4a71-2a4d-48cd-9dda-ba453a1c8766/frr-k8s-webhook-server/0.log"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.266719 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.280586 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x2rfb"]
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.290472 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/reloader/0.log"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.693567 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76d7f7cd57-dncnc_5ec28393-ea76-4413-a903-612126368291/manager/0.log"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.809938 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2lwr2_4be4374d-ae5a-4c2a-abba-b1cfea5dcbd5/frr/0.log"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.872663 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b857c8d44-88x9l_d90f33c9-1c81-4b74-a905-71aed9ecf222/webhook-server/0.log"
Feb 03 11:18:47 crc kubenswrapper[5010]: I0203 11:18:47.930795 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-mlsql_72e88a76-8c59-4d07-813e-d7d505d14c3b/kube-rbac-proxy/0.log"
Feb 03 11:18:48 crc kubenswrapper[5010]: I0203 11:18:48.465926 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-mlsql_72e88a76-8c59-4d07-813e-d7d505d14c3b/speaker/0.log"
Feb 03 11:18:48 crc kubenswrapper[5010]: I0203 11:18:48.534920 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" path="/var/lib/kubelet/pods/9526d09e-786a-4d86-a688-e4afe9b32bfe/volumes"
Feb 03 11:18:59 crc kubenswrapper[5010]: I0203 11:18:59.502719 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:18:59 crc kubenswrapper[5010]: E0203 11:18:59.503581 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:19:05 crc kubenswrapper[5010]: I0203 11:19:05.743405 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:19:05 crc kubenswrapper[5010]: I0203 11:19:05.915517 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.018706 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.052566 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.269421 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/util/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.270650 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.312350 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcxngzz_bad8c1c1-8f3a-45e1-a3c4-fa197d93d119/extract/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.511449 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.716021 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.716863 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.772605 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.938030 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/pull/0.log"
Feb 03 11:19:06 crc kubenswrapper[5010]: I0203 11:19:06.992330 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/util/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.022834 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713k25hl_a64fc313-0bcd-40df-a19f-052eb0d1ce8a/extract/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.169691 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.420587 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.427506 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.668889 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.889470 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-utilities/0.log"
Feb 03 11:19:07 crc kubenswrapper[5010]: I0203 11:19:07.935016 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/extract-content/0.log"
Feb 03 11:19:08 crc kubenswrapper[5010]: I0203 11:19:08.735253 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xwfjv_499eebdd-1202-4427-bf19-7ff14c5f8507/registry-server/0.log"
Feb 03 11:19:08 crc kubenswrapper[5010]: I0203 11:19:08.838138 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:19:08 crc kubenswrapper[5010]: I0203 11:19:08.990632 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.035540 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.066482 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.229162 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-utilities/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.254344 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/extract-content/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.527054 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lskbc_a2eeba6d-ed26-4b5b-a7b1-dd4a5d7702fe/marketplace-operator/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.709401 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.912990 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.927613 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7dtrz_41f0db19-3c04-4062-94da-f2058d7ef64a/registry-server/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.971006 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:19:09 crc kubenswrapper[5010]: I0203 11:19:09.971137 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:19:10 crc kubenswrapper[5010]: I0203 11:19:10.518565 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:19:10 crc kubenswrapper[5010]: E0203 11:19:10.519017 5010 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s4xnz_openshift-machine-config-operator(e607e2ef-d3d6-4db0-b514-0d5321d9d28d)\"" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d"
Feb 03 11:19:10 crc kubenswrapper[5010]: I0203 11:19:10.743442 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-content/0.log"
Feb 03 11:19:10 crc kubenswrapper[5010]: I0203 11:19:10.812397 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:19:10 crc kubenswrapper[5010]: I0203 11:19:10.862463 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/extract-utilities/0.log"
Feb 03 11:19:10 crc kubenswrapper[5010]: I0203 11:19:10.886376 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96wzf_0a04fc61-013a-4515-92ca-e620b3d376d5/registry-server/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.043314 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.050064 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.062536 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.315303 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-content/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.321828 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/extract-utilities/0.log"
Feb 03 11:19:11 crc kubenswrapper[5010]: I0203 11:19:11.935339 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gz7lx_1b4caad6-6b6c-452e-9be8-97e7115dbd72/registry-server/0.log"
Feb 03 11:19:25 crc kubenswrapper[5010]: I0203 11:19:25.505595 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"
Feb 03 11:19:26 crc kubenswrapper[5010]: I0203 11:19:26.629492 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"498da426eb755a9dc8fd80e2d0fdf6de3005068e582a4256ebdaa141ac61bf48"}
Feb 03 11:21:22 crc kubenswrapper[5010]: I0203 11:21:22.736309 5010 generic.go:334] "Generic (PLEG): container finished" podID="9734985d-a674-4c92-b03c-7ca708780de2" containerID="1bb6ed59c0b4992b1aaa8c727fe9862558803252bbff9dc2431ce922cbca729c" exitCode=0
Feb 03 11:21:22 crc kubenswrapper[5010]: I0203 11:21:22.736410 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mcw6z/must-gather-xf96m" event={"ID":"9734985d-a674-4c92-b03c-7ca708780de2","Type":"ContainerDied","Data":"1bb6ed59c0b4992b1aaa8c727fe9862558803252bbff9dc2431ce922cbca729c"}
Feb 03 11:21:22 crc kubenswrapper[5010]: I0203 11:21:22.738909 5010 scope.go:117] "RemoveContainer" containerID="1bb6ed59c0b4992b1aaa8c727fe9862558803252bbff9dc2431ce922cbca729c"
Feb 03 11:21:23 crc kubenswrapper[5010]: I0203 11:21:23.152204 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mcw6z_must-gather-xf96m_9734985d-a674-4c92-b03c-7ca708780de2/gather/0.log"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.230709 5010 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"]
Feb 03 11:21:30 crc kubenswrapper[5010]: E0203 11:21:30.232099 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="registry-server"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.232117 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="registry-server"
Feb 03 11:21:30 crc kubenswrapper[5010]: E0203 11:21:30.232137 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="extract-utilities"
Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.232144 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="extract-utilities" Feb 03 11:21:30 crc kubenswrapper[5010]: E0203 11:21:30.232163 5010 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="extract-content" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.232169 5010 state_mem.go:107] "Deleted CPUSet assignment" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="extract-content" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.232440 5010 memory_manager.go:354] "RemoveStaleState removing state" podUID="9526d09e-786a-4d86-a688-e4afe9b32bfe" containerName="registry-server" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.234305 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.251039 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"] Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.299125 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.299349 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.299649 5010 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4wr\" (UniqueName: \"kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.401327 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.401440 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c4wr\" (UniqueName: \"kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.401562 5010 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " 
pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.402187 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.402176 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.428462 5010 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c4wr\" (UniqueName: \"kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr\") pod \"redhat-operators-vl77p\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:30 crc kubenswrapper[5010]: I0203 11:21:30.558895 5010 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:31 crc kubenswrapper[5010]: I0203 11:21:31.117640 5010 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"] Feb 03 11:21:31 crc kubenswrapper[5010]: I0203 11:21:31.841058 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c52790b-6f85-4186-8264-58e7e9cecb86" containerID="fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915" exitCode=0 Feb 03 11:21:31 crc kubenswrapper[5010]: I0203 11:21:31.841117 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerDied","Data":"fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915"} Feb 03 11:21:31 crc kubenswrapper[5010]: I0203 11:21:31.841598 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerStarted","Data":"22b60b6c17f25c7c63dfd793804cd8900a0973bf01477d86497ce7e668e61f5d"} Feb 03 11:21:33 crc kubenswrapper[5010]: I0203 11:21:33.863507 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerStarted","Data":"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296"} Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.528593 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mcw6z/must-gather-xf96m"] Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.529364 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mcw6z/must-gather-xf96m" podUID="9734985d-a674-4c92-b03c-7ca708780de2" containerName="copy" containerID="cri-o://10474f5f43472032315addbe669cd60be39554b99965e76916b96cb1a8a1f7cb" gracePeriod=2 Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.541794 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mcw6z/must-gather-xf96m"] Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.875105 5010 
generic.go:334] "Generic (PLEG): container finished" podID="1c52790b-6f85-4186-8264-58e7e9cecb86" containerID="a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296" exitCode=0 Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.875688 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerDied","Data":"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296"} Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.878782 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mcw6z_must-gather-xf96m_9734985d-a674-4c92-b03c-7ca708780de2/copy/0.log" Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.879725 5010 generic.go:334] "Generic (PLEG): container finished" podID="9734985d-a674-4c92-b03c-7ca708780de2" containerID="10474f5f43472032315addbe669cd60be39554b99965e76916b96cb1a8a1f7cb" exitCode=143 Feb 03 11:21:34 crc kubenswrapper[5010]: I0203 11:21:34.999409 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mcw6z_must-gather-xf96m_9734985d-a674-4c92-b03c-7ca708780de2/copy/0.log" Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.000194 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.113731 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output\") pod \"9734985d-a674-4c92-b03c-7ca708780de2\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.114143 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lc2c\" (UniqueName: \"kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c\") pod \"9734985d-a674-4c92-b03c-7ca708780de2\" (UID: \"9734985d-a674-4c92-b03c-7ca708780de2\") " Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.124551 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c" (OuterVolumeSpecName: "kube-api-access-7lc2c") pod "9734985d-a674-4c92-b03c-7ca708780de2" (UID: "9734985d-a674-4c92-b03c-7ca708780de2"). InnerVolumeSpecName "kube-api-access-7lc2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.218491 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lc2c\" (UniqueName: \"kubernetes.io/projected/9734985d-a674-4c92-b03c-7ca708780de2-kube-api-access-7lc2c\") on node \"crc\" DevicePath \"\"" Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.356082 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9734985d-a674-4c92-b03c-7ca708780de2" (UID: "9734985d-a674-4c92-b03c-7ca708780de2"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.424304 5010 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9734985d-a674-4c92-b03c-7ca708780de2-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.897773 5010 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mcw6z_must-gather-xf96m_9734985d-a674-4c92-b03c-7ca708780de2/copy/0.log" Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.898633 5010 scope.go:117] "RemoveContainer" containerID="10474f5f43472032315addbe669cd60be39554b99965e76916b96cb1a8a1f7cb" Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.898842 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mcw6z/must-gather-xf96m" Feb 03 11:21:35 crc kubenswrapper[5010]: I0203 11:21:35.954541 5010 scope.go:117] "RemoveContainer" containerID="1bb6ed59c0b4992b1aaa8c727fe9862558803252bbff9dc2431ce922cbca729c" Feb 03 11:21:36 crc kubenswrapper[5010]: I0203 11:21:36.535878 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9734985d-a674-4c92-b03c-7ca708780de2" path="/var/lib/kubelet/pods/9734985d-a674-4c92-b03c-7ca708780de2/volumes" Feb 03 11:21:36 crc kubenswrapper[5010]: I0203 11:21:36.947202 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerStarted","Data":"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367"} Feb 03 11:21:40 crc kubenswrapper[5010]: I0203 11:21:40.559299 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:40 crc kubenswrapper[5010]: I0203 11:21:40.559947 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:41 crc kubenswrapper[5010]: I0203 11:21:41.608648 5010 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vl77p" podUID="1c52790b-6f85-4186-8264-58e7e9cecb86" containerName="registry-server" probeResult="failure" output=< Feb 03 11:21:41 crc kubenswrapper[5010]: timeout: failed to connect service ":50051" within 1s Feb 03 11:21:41 crc kubenswrapper[5010]: > Feb 03 11:21:46 crc kubenswrapper[5010]: I0203 11:21:46.390019 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:21:46 crc kubenswrapper[5010]: I0203 11:21:46.390628 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:21:50 crc kubenswrapper[5010]: I0203 11:21:50.614250 5010 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:50 crc kubenswrapper[5010]: I0203 11:21:50.637741 5010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-vl77p" podStartSLOduration=16.798391357 podStartE2EDuration="20.637718272s" podCreationTimestamp="2026-02-03 11:21:30 +0000 UTC" firstStartedPulling="2026-02-03 11:21:31.843394639 +0000 UTC m=+4761.999370768" lastFinishedPulling="2026-02-03 11:21:35.682721554 +0000 UTC m=+4765.838697683" observedRunningTime="2026-02-03 11:21:36.995270305 +0000 UTC m=+4767.151246444" watchObservedRunningTime="2026-02-03 11:21:50.637718272 +0000 UTC m=+4780.793694391" Feb 03 11:21:50 crc kubenswrapper[5010]: I0203 11:21:50.669764 5010 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:50 crc kubenswrapper[5010]: I0203 11:21:50.867166 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"] Feb 03 11:21:52 crc kubenswrapper[5010]: I0203 11:21:52.520290 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vl77p" podUID="1c52790b-6f85-4186-8264-58e7e9cecb86" containerName="registry-server" containerID="cri-o://1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367" gracePeriod=2 Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.165640 5010 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.180734 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content\") pod \"1c52790b-6f85-4186-8264-58e7e9cecb86\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.180878 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities\") pod \"1c52790b-6f85-4186-8264-58e7e9cecb86\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.180957 5010 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4wr\" (UniqueName: \"kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr\") pod \"1c52790b-6f85-4186-8264-58e7e9cecb86\" (UID: \"1c52790b-6f85-4186-8264-58e7e9cecb86\") " Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.181958 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities" (OuterVolumeSpecName: "utilities") pod "1c52790b-6f85-4186-8264-58e7e9cecb86" (UID: "1c52790b-6f85-4186-8264-58e7e9cecb86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.200319 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr" (OuterVolumeSpecName: "kube-api-access-7c4wr") pod "1c52790b-6f85-4186-8264-58e7e9cecb86" (UID: "1c52790b-6f85-4186-8264-58e7e9cecb86"). InnerVolumeSpecName "kube-api-access-7c4wr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.283012 5010 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.283059 5010 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4wr\" (UniqueName: \"kubernetes.io/projected/1c52790b-6f85-4186-8264-58e7e9cecb86-kube-api-access-7c4wr\") on node \"crc\" DevicePath \"\"" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.348693 5010 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c52790b-6f85-4186-8264-58e7e9cecb86" (UID: "1c52790b-6f85-4186-8264-58e7e9cecb86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.385379 5010 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c52790b-6f85-4186-8264-58e7e9cecb86-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.535966 5010 generic.go:334] "Generic (PLEG): container finished" podID="1c52790b-6f85-4186-8264-58e7e9cecb86" containerID="1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367" exitCode=0 Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.536037 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerDied","Data":"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367"} Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.536115 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vl77p" event={"ID":"1c52790b-6f85-4186-8264-58e7e9cecb86","Type":"ContainerDied","Data":"22b60b6c17f25c7c63dfd793804cd8900a0973bf01477d86497ce7e668e61f5d"} Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.536128 5010 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vl77p" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.536167 5010 scope.go:117] "RemoveContainer" containerID="1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.563162 5010 scope.go:117] "RemoveContainer" containerID="a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.589512 5010 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"] Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.599366 5010 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vl77p"] Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.611650 5010 scope.go:117] "RemoveContainer" containerID="fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.650887 5010 scope.go:117] "RemoveContainer" containerID="1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367" Feb 03 11:21:53 crc kubenswrapper[5010]: E0203 11:21:53.651391 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367\": container with ID starting with 1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367 not found: ID does not exist" containerID="1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.651430 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367"} err="failed to get container status \"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367\": rpc error: code = NotFound desc = could not find container \"1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367\": container with ID starting with 1c82e4e381d6c0ef51486c1d913b23ce5ad7962414a88f2152cd133d93a40367 not found: ID does not exist" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.651456 5010 scope.go:117] "RemoveContainer" containerID="a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296" Feb 03 11:21:53 crc kubenswrapper[5010]: E0203 11:21:53.651709 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296\": container with ID starting with a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296 not found: ID does not exist" containerID="a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.651735 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296"} err="failed to get container status \"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296\": rpc error: code = NotFound desc = could not find container \"a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296\": container with ID starting with a8e93f33da85b2b82e2dd1fdd9480472833652a9ae679af53214e2d67b135296 not found: ID does not exist" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.651766 5010 scope.go:117] "RemoveContainer" 
containerID="fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915" Feb 03 11:21:53 crc kubenswrapper[5010]: E0203 11:21:53.652068 5010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915\": container with ID starting with fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915 not found: ID does not exist" containerID="fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915" Feb 03 11:21:53 crc kubenswrapper[5010]: I0203 11:21:53.652102 5010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915"} err="failed to get container status \"fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915\": rpc error: code = NotFound desc = could not find container \"fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915\": container with ID starting with fed246cf15bd9f897eb00ee6c7dd755f4bdf771a34fb20fa112191cbcb22d915 not found: ID does not exist" Feb 03 11:21:54 crc kubenswrapper[5010]: I0203 11:21:54.526210 5010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c52790b-6f85-4186-8264-58e7e9cecb86" path="/var/lib/kubelet/pods/1c52790b-6f85-4186-8264-58e7e9cecb86/volumes" Feb 03 11:22:16 crc kubenswrapper[5010]: I0203 11:22:16.390698 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:22:16 crc kubenswrapper[5010]: I0203 11:22:16.391661 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:22:46 crc kubenswrapper[5010]: I0203 11:22:46.390443 5010 patch_prober.go:28] interesting pod/machine-config-daemon-s4xnz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 11:22:46 crc kubenswrapper[5010]: I0203 11:22:46.390967 5010 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 11:22:46 crc kubenswrapper[5010]: I0203 11:22:46.391033 5010 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" Feb 03 11:22:46 crc kubenswrapper[5010]: I0203 11:22:46.392044 5010 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"498da426eb755a9dc8fd80e2d0fdf6de3005068e582a4256ebdaa141ac61bf48"} pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 11:22:46 crc 
kubenswrapper[5010]: I0203 11:22:46.392121 5010 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" podUID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerName="machine-config-daemon" containerID="cri-o://498da426eb755a9dc8fd80e2d0fdf6de3005068e582a4256ebdaa141ac61bf48" gracePeriod=600 Feb 03 11:22:47 crc kubenswrapper[5010]: I0203 11:22:47.131715 5010 generic.go:334] "Generic (PLEG): container finished" podID="e607e2ef-d3d6-4db0-b514-0d5321d9d28d" containerID="498da426eb755a9dc8fd80e2d0fdf6de3005068e582a4256ebdaa141ac61bf48" exitCode=0 Feb 03 11:22:47 crc kubenswrapper[5010]: I0203 11:22:47.131832 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerDied","Data":"498da426eb755a9dc8fd80e2d0fdf6de3005068e582a4256ebdaa141ac61bf48"} Feb 03 11:22:47 crc kubenswrapper[5010]: I0203 11:22:47.133037 5010 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s4xnz" event={"ID":"e607e2ef-d3d6-4db0-b514-0d5321d9d28d","Type":"ContainerStarted","Data":"204b51e4d5b74a8157191003f28432d43c32c9430018526b50e2bb5e62e1873a"} Feb 03 11:22:47 crc kubenswrapper[5010]: I0203 11:22:47.133071 5010 scope.go:117] "RemoveContainer" containerID="016a1c423d445be3d994e74fc0273a19252cb582e461796e14e648b35e1b4938"